aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---|
1106.5128
|
2081058753
|
We develop local reasoning techniques for message passing concurrent programs based on ideas from separation logics and resource usage analysis. We extend processes with permission-resources and define a reduction semantics for this extended language. This provides a foundation for interpreting separation formulas for message-passing concurrency. We also define a sound proof system permitting us to infer satisfaction compositionally using local, separation-based reasoning.
|
Following @cite_8 , the use of separation logic to support local reasoning for concurrent programs has been studied intensively over the past few years for the shared-variable model of concurrency. The key initial idea, that ownership transfer of resources between threads affects local reasoning, already appears in @cite_8 . This was then extended to coexist with Rely-Guarantee reasoning @cite_40 @cite_14 and recently refined through fractional permissions into Deny-Guarantee reasoning @cite_11 . The latter is interesting to us as a means of widening the class of programs under analysis. For instance, @cite_25 uses this approach to deal with dynamically allocated resource locks.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_40",
"@cite_25",
"@cite_11"
],
"mid": [
"1607674807",
"2121405115",
"1819989006",
"2155032935",
""
],
"abstract": [
"We study the relationship between Concurrent Separation Logic (CSL) and the assume-guarantee (A-G) method (a.k.a. rely-guarantee method). We show in three steps that CSL can be treated as a specialization of the A-G method for well-synchronized concurrent programs. First, we present an A-G based program logic for a low-level language with built-in locking primitives. Then we extend the program logic with explicit separation of \"private data\" and \"shared data\", which provides better memory modularity. Finally, we show that CSL (adapted for the low-level language) can be viewed as a specialization of the extended A-G logic by enforcing the invariant that \"shared resources are well-formed outside of critical regions\". This work can also be viewed as a different approach (from Brookes') to proving the soundness of CSL: our CSL inference rules are proved as lemmas in the A-G based logic, whose soundness is established following the syntactic approach to proving soundness of type systems.",
"In this paper we show how a resource-oriented logic, separation logic, can be used to reason about the usage of resources in concurrent programs.",
"In the quest for tractable methods for reasoning about concurrent algorithms both rely guarantee logic and separation logic have made great advances. They both seek to tame, or control, the complexity of concurrent interactions, but neither is the ultimate approach. Rely-guarantee copes naturally with interference, but its specifications are complex because they describe the entire state. Conversely separation logic has difficulty dealing with interference, but its specifications are simpler because they describe only the relevant state that the program accesses. We propose a combined system which marries the two approaches. We can describe interference naturally (using a relation as in rely guarantee), and where there is no interference, we can reason locally (as in separation logic). We demonstrate the advantages of the combined approach by verifying a lock-coupling list algorithm, which actually disposes (frees) removed nodes.",
"We present a resource oriented program logic that is able to reason about concurrent heap-manipulating programs with unbounded numbers of dynamically-allocated locks and threads. The logic is inspired by concurrent separation logic, but handles these more realistic concurrency primitives. We demonstrate that the proposed logic allows local reasoning about programs for which there exists a notion of dynamic ownership of heap parts by locks and threads.",
""
]
}
|
1106.5112
|
1863666956
|
In this paper we examine the application of the random forest classifier to the all-relevant feature selection problem. To this end we first examine two recently proposed all-relevant feature selection algorithms, both being random forest wrappers, on a series of synthetic data sets with varying size. We show that reasonable accuracy of predictions can be achieved and that heuristic algorithms that were designed to handle the all-relevant problem have performance that is close to that of the reference ideal algorithm. Then, we apply one of the algorithms to four families of semi-synthetic data sets to assess how the properties of a particular data set influence the results of feature selection. Finally we test the procedure using a well-known gene expression data set. The relevance of nearly all previously established important genes was confirmed; moreover, the relevance of several new ones was discovered.
|
Bayesian network inference is often performed as a wrapper over Naïve Bayes classifiers and could be used for all-relevant feature selection. However, since in all practical implementations the search is limited to simple and previously postulated forms of node-node interactions @cite_19 @cite_9 @cite_8 , these methods are not suitable for finding the non-trivial attributes that are the subject of this study.
|
{
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_8"
],
"mid": [
"2097176879",
"",
"1982254517"
],
"abstract": [
"We analyze two different feature selection problems: finding a minimal feature set optimal for classification (MINIMAL-OPTIMAL) vs. finding all features relevant to the target variable (ALL-RELEVANT). The latter problem is motivated by recent applications within bioinformatics, particularly gene expression analysis. For both problems, we identify classes of data distributions for which there exist consistent, polynomial-time algorithms. We also prove that ALL-RELEVANT is much harder than MINIMAL-OPTIMAL and propose two consistent, polynomial-time algorithms. We argue that the distribution classes considered are reasonable in many practical cases, so that our results simplify feature selection in a wide range of machine learning tasks.",
"",
"We aim to identify the minimal subset of random variables that is relevant for probabilistic classification in data sets with many variables but few instances. A principled solution to this problem is to determine the Markov boundary of the class variable. In this paper, we propose a novel constraint-based Markov boundary discovery algorithm called MBOR with the objective of improving accuracy while still remaining scalable to very high dimensional data sets and theoretically correct under the so-called faithfulness condition. We report extensive empirical experiments on synthetic data sets scaling up to tens of thousand variables."
]
}
|
1106.5112
|
1863666956
|
In this paper we examine the application of the random forest classifier to the all-relevant feature selection problem. To this end we first examine two recently proposed all-relevant feature selection algorithms, both being random forest wrappers, on a series of synthetic data sets with varying size. We show that reasonable accuracy of predictions can be achieved and that heuristic algorithms that were designed to handle the all-relevant problem have performance that is close to that of the reference ideal algorithm. Then, we apply one of the algorithms to four families of semi-synthetic data sets to assess how the properties of a particular data set influence the results of feature selection. Finally we test the procedure using a well-known gene expression data set. The relevance of nearly all previously established important genes was confirmed; moreover, the relevance of several new ones was discovered.
|
The algorithm of Rogers & Gunn @cite_26 uses the internals of random forest construction for feature selection. It relies on a theoretical model giving an estimate of the information gain of a split made on a non-informative attribute, which, averaged over the forest, is used to test the relevance of the original attributes. This method, while elegant, is not particularly good at discerning between relevant and irrelevant features. The authors present results for the Madelon problem, where 130 features were deemed relevant by this algorithm at confidence level @math , whereas there are only 20 relevant features in this set. For comparison, this problem is solved nearly perfectly by the Boruta algorithm @cite_14 (a rough sketch of the shadow-feature idea behind such wrappers follows this record).
|
{
"cite_N": [
"@cite_14",
"@cite_26"
],
"mid": [
"2156665896",
"1536061269"
],
"abstract": [
"This article describes the R package Boruta, implementing a novel feature selection algorithm for finding all relevant variables. The algorithm is designed as a wrapper around a Random Forest classification algorithm. It iteratively removes the features which are proved by a statistical test to be less relevant than random probes. The Boruta package provides a convenient interface to the algorithm. A short description of the algorithm and examples of its application are presented.",
"It is known that feature selection and feature relevance can benefit the performance and interpretation of machine learning algorithms. Here we consider feature selection within a Random Forest framework. A feature selection technique is introduced that combines hypothesis testing with an approximation to the expected performance of an irrelevant feature during Random Forest construction. It is demonstrated that the lack of implicit feature selection within Random Forest has an adverse effect on the accuracy and efficiency of the algorithm. It is also shown that irrelevant features can slow the rate of error convergence and a theoretical justification of this effect is given."
]
}
|
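The record above compares a relevance test built from random forest internals with the Boruta wrapper. As a rough, single-pass illustration of the shadow-feature idea that such wrappers build on (not the Boruta package itself, which iterates and applies a statistical test), the following Python sketch compares each feature's importance against the best importance achieved by permuted shadow copies; the function name `shadow_relevance` and its parameters are our own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_relevance(X, y, n_estimators=200, random_state=0):
    """Single-pass shadow-feature test: a feature is tentatively relevant if its
    importance exceeds the best importance of any permuted (shadow) feature."""
    X = np.asarray(X)
    rng = np.random.default_rng(random_state)
    shadows = rng.permuted(X, axis=0)          # shuffle each column, breaking any link to y
    X_aug = np.hstack([X, shadows])
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    rf.fit(X_aug, y)
    imp = rf.feature_importances_
    n = X.shape[1]
    return imp[:n] > imp[n:].max()             # boolean mask over the original features

# Usage (with X of shape (n_samples, n_features) and labels y):
# mask = shadow_relevance(X, y); selected = np.where(mask)[0]
```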
1106.4700
|
1874924121
|
Static program verifiers such as Spec#, Dafny, jStar, and VeriFast define the state of the art in automated functional verification techniques. The next open challenges are to make verification tools usable even by programmers not fluent in formal techniques. This paper presents AutoProof, a verification tool that translates Eiffel programs to Boogie and uses the Boogie verifier to prove them. In an effort to be usable with real programs, AutoProof fully supports several advanced object-oriented features including polymorphism, inheritance, and function objects. AutoProof also adopts simple strategies to reduce the amount of annotations needed when verifying programs (e.g., frame conditions). The paper illustrates the main features of AutoProof's translation, including some whose implementation is underway, and demonstrates them with examples and a case study.
|
Spec# has shown the advantages of using an intermediate language for verification. Other tools such as Dafny @cite_0 and Chalice @cite_10 , and techniques based on Region Logic @cite_8 , follow this approach; they also rely on Boogie as the intermediate language and verification back-end, in the same way as AutoProof does.
|
{
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_8"
],
"mid": [
"2130427425",
"1555179958",
"1548974835"
],
"abstract": [
"Traditionally, the full verification of a program's functional correctness has been obtained with pen and paper or with interactive proof assistants, whereas only reduced verification tasks, such as extended static checking, have enjoyed the automation offered by satisfiability-modulo-theories (SMT) solvers. More recently, powerful SMT solvers and well-designed program verifiers are starting to break that tradition, thus reducing the effort involved in doing full verification. This paper gives a tour of the language and verifier Dafny, which has been used to verify the functional correctness of a number of challenging pointer-based programs. The paper describes the features incorporated in Dafny, illustrating their use by small examples and giving a taste of how they are coded for an SMT solver. As a larger case study, the paper shows the full functional specification of the Schorr-Waite algorithm in Dafny.",
"Advanced multi-threaded programs apply concurrency concepts in sophisticated ways. For instance, they use fine-grained locking to increase parallelism and change locking orders dynamically when data structures are being reorganized. This paper presents a sound and modular verification methodology that can handle advanced concurrency patterns in multi-threaded, object-based programs. The methodology is based on implicit dynamic frames and uses fractional permissions to support fine-grained locking. It supports concepts such as multi-object monitor invariants, thread-local and shared objects, thread pre- and postconditions, and deadlock prevention with a dynamically changeable locking order. The paper prescribes the generation of verification conditions in first-order logic, well-suited for scrutiny by off-the-shelf SMT solvers. A verifier for the methodology has been implemented for an experimental language, and has been used to verify several challenging examples including hand-over-hand locking for linked lists and a lock re-ordering algorithm.",
"Shared mutable objects pose grave challenges in reasoning, especially for data abstraction and modularity. This paper presents a novel logic for error-avoiding partial correctness of programs featuring shared mutable objects. Using a first order assertion language, the logic provides heap-local reasoning about mutation and separation, via ghost fields and variables of type region' (finite sets of object references). A new form of modifies clause specifies write, read, and allocation effects using region expressions; this supports effect masking and a frame rule that allows a command to read state on which the framed predicate depends. Soundness is proved using a standard program semantics. The logic facilitates heap-local reasoning about object invariants: disciplines such as ownership are expressible but not hard-wired in the logic."
]
}
|
1106.4700
|
1874924121
|
Static program verifiers such as Spec#, Dafny, jStar, and VeriFast define the state of the art in automated functional verification techniques. The next open challenges are to make verification tools usable even by programmers not fluent in formal techniques. This paper presents AutoProof, a verification tool that translates Eiffel programs to Boogie and uses the Boogie verifier to prove them. In an effort to be usable with real programs, AutoProof fully supports several advanced object-oriented features including polymorphism, inheritance, and function objects. AutoProof also adopts simple strategies to reduce the amount of annotations needed when verifying programs (e.g., frame conditions). The paper illustrates the main features of AutoProof's translation, including some whose implementation is underway, and demonstrates them with examples and a case study.
|
Separation logic @cite_7 is an extension of Hoare logic with connectives that express separation between regions of the heap, which provides an elegant approach to reasoning about programs with mutable data structures. Verification environments based on separation logic---such as jStar @cite_2 and VeriFast @cite_1 ---can verify advanced features such as uses of the visitor, observer, and factory design patterns. On the other hand, writing separation logic annotations requires considerably more expertise than using standard contracts embedded in the programming language; this makes tools based on separation logic more challenging for practitioners to use.
|
{
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_2"
],
"mid": [
"1606142489",
"2130111506",
""
],
"abstract": [
"This paper describes the main features of VeriFast, a sound and modular program verifier for C and Java. VeriFast takes as input a number of source files annotated with method contracts written in separation logic, inductive data type and fixpoint definitions, lemma functions and proof steps. The verifier checks that (1) the program does not perform illegal operations such as dividing by zero or illegal memory accesses and (2) that the assumptions described in method contracts hold in each execution. Although VeriFast supports specifying and verifying deep data structure properties, it provides an interactive verification experience as verification times are consistently low and errors can be diagnosed using its symbolic debugger. VeriFast and a large number of example programs are available online at: http: www.cs.kuleuven.be bartj verifast",
"We investigate proof rules for information hiding, using the formalism of separation logic. In essence, we use the separating conjunction to partition the internal resources of a module from those accessed by the module's clients. The use of a logical connective gives rise to a form of dynamic partitioning, where we track the transfer of ownership of portions of heap storage between program components. It also enables us to enforce separation in the presence of mutable data structures with embedded addresses that may be aliased.",
""
]
}
|
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
Our research adheres to computational learning theory @cite_28 , a branch of machine learning, and in particular to the area of language inference @cite_42 . Our learning framework is inspired by the one generally used for inference of languages of words and trees @cite_8 @cite_1 (see also @cite_37 for a survey of the area). Analogous frameworks have been employed in the context of XML for learning DTDs and XML Schemas @cite_15 @cite_39 , XML transformations @cite_24 , and @math -ary automata queries @cite_5 .
|
{
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_28",
"@cite_42",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_5",
"@cite_15"
],
"mid": [
"2120437191",
"",
"1520252399",
"",
"2494400401",
"2569219525",
"2129010478",
"2020298184",
"2063985934"
],
"abstract": [
"The field of grammatical inference (also known as grammar induction) is transversal to a number of research areas including machine learning, formal language theory, syntactic and structural pattern recognition, computational linguistics, computational biology and speech recognition. There is no uniform literature on the subject and one can find many papers with original definitions or points of view. This makes research in this subject very hard, mainly for a beginner or someone who does not wish to become a specialist but just to find the most suitable ideas for his own research activity. The goal of this paper is to introduce a certain number of papers related with grammatical inference. Some of these papers are essential and should constitute a common background to research in the area, whereas others are specialized on particular problems or techniques, but can be of great help on specific tasks.",
"",
"The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.",
"",
"",
"Inferring an appropriate DTD or XML Schema Definition (XSD) for a given collection of XML documents essentially reduces to learning deterministic regular expressions from sets of positive example words. Unfortunately, there is no algorithm capable of learning the complete class of deterministic regular expressions from positive examples only, as we will show. The regular expressions occurring in practical DTDs and XSDs, however, are such that every alphabet symbol occurs only a small number of times. As such, in practice it suffices to learn the subclass of deterministic regular expressions in which each alphabet symbol occurs at most k times, for some small k. We refer to such expressions as k-occurrence regular expressions (k-OREs for short). Motivated by this observation, we provide a probabilistic algorithm that learns k-OREs for increasing values of k, and selects the deterministic one that best describes the sample based on a Minimum Description Length argument. The effectiveness of the method is empirically validated both on real world and synthetic data. Furthermore, the method is shown to be conservative over the simpler classes of expressions considered in previous work.",
"A generalization from string to trees and from languages to translations is given of the classical result that any regular language can be learned from examples: it is shown that for any deterministic top-down tree transformation there exists a sample set of polynomial size (with respect to the minimal transducer) which allows to infer the translation. Until now, only for string transducers and for simple relabeling tree transducers, similar results had been known. Learning of deterministic top-down tree transducers (dtops) is far more involved because a dtop can copy, delete, and permute its input subtrees. Thus, complex dependencies of labeled input to output paths need to be maintained by the algorithm. First, a Myhill-Nerode theorem is presented for dtops, which is interesting on its own. This theorem is then used to construct a learning algorithm for dtops. Finally, it is shown how our result can be applied to xml transformations (e.g. xslt programs). For this, a new dtd-based encoding of unranked trees by ranked ones is presented. Over such encodings, dtops can realize many practically interesting xml transformations which cannot be realized on firstchild next-sibling encodings.",
"We develop new algorithms for learning monadic node selection queries in unranked trees from annotated examples, and apply them to visually interactive Web information extraction. We propose to represent monadic queries by bottom-up deterministic Node Selecting Tree Transducers (NSTTs), a particular class of tree automata that we introduce. We prove that deterministic NSTTs capture the class of queries definable in monadic second order logic (MSO) in trees, which Gottlob and Koch (2002) argue to have the right expressiveness for Web information extraction, and prove that monadic queries defined by NSTTs can be answered efficiently. We present a new polynomial time algorithm in RPNI-style that learns monadic queries defined by deterministic NSTTs from completely annotated examples, where all selected nodes are distinguished. In practice, users prefer to provide partial annotations. We propose to account for partial annotations by intelligent tree pruning heuristics. We introduce pruning NSTTs--a formalism that shares many advantages of NSTTs. This leads us to an interactive learning algorithm for monadic queries defined by pruning NSTTs, which satisfies a new formal active learning model in the style of Angluin (1987). We have implemented our interactive learning algorithm integrated it into a visually interactive Web information extraction system--called SQUIRREL--by plugging it into the Mozilla Web browser. Experiments on realistic Web documents confirm excellent quality with very few user interactions during wrapper induction.",
"We consider the problem of inferring a concise Document Type Definition (DTD) for a given set of XML-documents, a problem that basically reduces to learning concise regular expressions from positive examples strings. We identify two classes of concise regular expressions—the single occurrence regular expressions (SOREs) and the chain regular expressions (CHAREs)—that capture the far majority of expressions used in practical DTDs. For the inference of SOREs we present several algorithms that first infer an automaton for a given set of example strings and then translate that automaton to a corresponding SORE, possibly repairing the automaton when no equivalent SORE can be found. In the process, we introduce a novel automaton to regular expression rewrite technique which is of independent interest. When only a very small amount of XML data is available, however (for instance when the data is generated by Web service requests or by answers to queries), these algorithms produce regular expressions that are too specific. Therefore, we introduce a novel learning algorithm crx that directly infers CHAREs (which form a subclass of SOREs) without going through an automaton representation. We show that crx performs very well within its target class on very small datasets."
]
}
|
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
Our basic learning algorithm for unary embeddable path queries is inspired by, and can be seen as an extension of, algorithms for inference of word patterns @cite_35 @cite_9 (see @cite_16 for a survey of the area). A word pattern is a word using extra wildcard characters. For instance, the patterns of @cite_35 use a wildcard @math matching any nonempty string, e.g., @math matches @math and @math but not @math , @math , and @math . The extended patterns of @cite_9 use a wildcard @math that matches any (possibly empty) string, e.g., @math matches @math and @math . To capture unary path queries we need the wildcard @math and another wildcard @math that matches a single letter; then, for instance, the pattern @math corresponds to the path query @math when interpreted over the paths of the input tree. We observe that @math is equivalent to @math and we engineer our learning algorithm using the ideas behind the algorithms for inference of regular patterns @cite_35 and extended regular patterns @cite_9 (an illustrative sketch of wildcard-pattern matching follows this record).
|
{
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_16"
],
"mid": [
"1965415591",
"1505882215",
""
],
"abstract": [
"Abstract Assume a finite alphabet of constant symbols and a disjoint infinite alphabet of variable symbols . A pattern is a non-null finite string of constant and variable symbols. The language of a pattern is all strings obtainable by substituting non-null strings of constant symbols for the variables of the pattern. A sample is a finite nonempty set of non-null strings of constant symbols. Given a sample S , a pattern p is descriptive of S provided the language of p contains S and does not properly contain the language of any other pattern that contains S . The computational problem of finding a pattern descriptive of a given sample is studied. The main result is a polynomial-time algorithm for the special case of patterns containing only one variable symbol (possibly occurring several times in the pattern). Several other results are proved concerning the class of languages generated by patterns and the problem of finding a descriptive pattern.",
"A pattern is a string of constant symbols and variable symbols. The language of a pattern p is the set of all strings obtained by substituting any non-empty constant string for each variable symbol in p. A regular pattern has at most one occurrence of each variable symbol. In this paper, we consider polynomial time inference from positive data for the class of extended regular pattern languages which are sets of all strings obtained by substituting any (possibly empty) constant string, instead of non-empty string. Our inference machine uses MINL calculation which finds a minimal language containing a given finite set of strings. The relation between MINL calculation for the class of extended regular pattern languages and the longest common subsequence problem is also discussed.",
""
]
}
|
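To make the wildcard notation in the record above concrete, here is a minimal, illustrative Python sketch (not the authors' algorithm) that interprets word patterns by compiling them to regular expressions; the symbols '*' (any nonempty string) and '?' (a single letter) are our own stand-ins for the @math wildcards of the text.

```python
import re

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Compile a word pattern into an anchored regular expression.
    '*' stands for a wildcard matching any nonempty string, '?' for a single
    letter; every other character is a constant symbol."""
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append(".+")       # nonempty gap
        elif ch == "?":
            parts.append(".")        # exactly one letter
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$")

# "a*b" plays the role of a descendant-style step between two labels:
p = pattern_to_regex("a*b")
assert p.match("axb") and p.match("axyzb")
assert not p.match("ab")             # '*' requires at least one letter in between
q = pattern_to_regex("a?b")
assert q.match("axb") and not q.match("axxb")
```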
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
Learning of unary XML queries has been pursued with the use of node selecting tree automata @cite_5 , with extensions that infer @math -ary queries @cite_43 , take advantage of schema information @cite_38 , and use pruning techniques to handle incompletely annotated documents @cite_5 . The main advantage of node selecting tree automata is their expressive power: they capture exactly the class of @math -ary MSO tree queries @cite_41 @cite_43 , which properly includes twig and path queries. However, tree automata have several drawbacks that may render them unsuitable for learning in certain scenarios: they are a heavy querying formalism with little support from the existing infrastructure, and they do not allow an easy visualization of the inferred query.
|
{
"cite_N": [
"@cite_41",
"@cite_5",
"@cite_43",
"@cite_38"
],
"mid": [
"2035020702",
"2020298184",
"1530285431",
"1556908210"
],
"abstract": [
"Many of the important concepts and results of conventional finite automata theory are developed for a generalization in which finite algebras take the place of finite automata. The standard closure theorems are proved for the class of sets “recognizable” by finite algebras, and a generalization of Kleene's regularity theory is presented. The theorems of the generalized theory are then applied to obtain a positive solution to a decision problem of second-order logic.",
"We develop new algorithms for learning monadic node selection queries in unranked trees from annotated examples, and apply them to visually interactive Web information extraction. We propose to represent monadic queries by bottom-up deterministic Node Selecting Tree Transducers (NSTTs), a particular class of tree automata that we introduce. We prove that deterministic NSTTs capture the class of queries definable in monadic second order logic (MSO) in trees, which Gottlob and Koch (2002) argue to have the right expressiveness for Web information extraction, and prove that monadic queries defined by NSTTs can be answered efficiently. We present a new polynomial time algorithm in RPNI-style that learns monadic queries defined by deterministic NSTTs from completely annotated examples, where all selected nodes are distinguished. In practice, users prefer to provide partial annotations. We propose to account for partial annotations by intelligent tree pruning heuristics. We introduce pruning NSTTs--a formalism that shares many advantages of NSTTs. This leads us to an interactive learning algorithm for monadic queries defined by pruning NSTTs, which satisfies a new formal active learning model in the style of Angluin (1987). We have implemented our interactive learning algorithm integrated it into a visually interactive Web information extraction system--called SQUIRREL--by plugging it into the Mozilla Web browser. Experiments on realistic Web documents confirm excellent quality with very few user interactions during wrapper induction.",
"We present the first algorithm for learning n-ary node selection queries in trees from completely annotated examples by methods of grammatical inference. We propose to represent n-ary queries by deterministic n-ary node selecting tree transducers (n-NSTTs). These are tree automata that capture the class of monadic second-order definable n-ary queries. We show that n-NSTTs defined polynomially bounded n-ary queries can be learned from polynomial time and data. An application in Web information extraction yields encouraging results.",
"The induction of monadic node selecting queries from partially annotated XML-trees is a key task in Web information extraction. We show how to integrate schema guidance into an RPNI-based learning algorithm, in which monadic queries are represented by pruning node selecting tree transducers. We present experimental results on schema guidance by the DTD of HTML."
]
}
|
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
Although the class of twig queries is properly included in the class of MSO queries and path queries are captured by regular languages, using automata-based techniques to infer the query and then converting it to twigs is unlikely to be successful: automata-to-expression translation is a notoriously difficult task that typically leads to a significant blowup @cite_6 , and it is generally considered beneficial to avoid it @cite_3 . An alternative approach, along the lines of @cite_15 , would be to define a set of structural restrictions on the automaton that ensure an easy translation to twig queries and to enforce those conditions during inference. However, such restrictions would need to be very strong, at least for twig queries, and this approach would require significant modification of the inference algorithm, to the point where it would constitute a new algorithm.
|
{
"cite_N": [
"@cite_15",
"@cite_3",
"@cite_6"
],
"mid": [
"2063985934",
"",
"2066064214"
],
"abstract": [
"We consider the problem of inferring a concise Document Type Definition (DTD) for a given set of XML-documents, a problem that basically reduces to learning concise regular expressions from positive examples strings. We identify two classes of concise regular expressions—the single occurrence regular expressions (SOREs) and the chain regular expressions (CHAREs)—that capture the far majority of expressions used in practical DTDs. For the inference of SOREs we present several algorithms that first infer an automaton for a given set of example strings and then translate that automaton to a corresponding SORE, possibly repairing the automaton when no equivalent SORE can be found. In the process, we introduce a novel automaton to regular expression rewrite technique which is of independent interest. When only a very small amount of XML data is available, however (for instance when the data is generated by Web service requests or by answers to queries), these algorithms produce regular expressions that are too specific. Therefore, we introduce a novel learning algorithm crx that directly infers CHAREs (which form a subclass of SOREs) without going through an automaton representation. We show that crx performs very well within its target class on very small datasets.",
"",
"Several measures of the complexity of a regular expression are defined. (Star height and number of alphabetical symbols are two of them.) Upper and lower estimates for the complexities of expressions for certain sets of paths on graphs are derived."
]
}
|
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
Methods used for inference of languages represented by automata differ from the methods used in our learning algorithms. Automata-based inference typically begins by constructing an automaton recognizing exactly the set of positive examples, which is then generalized by a series of generalization operations, e.g., fusions of pairs of states. To avoid overgeneralization of the automata, negative examples are used to retain only the consistent generalization operations @cite_2 , and if negative examples are not available, structural properties of the automata class can be used to guide the generalization process @cite_19 @cite_32 @cite_39 . Our algorithms, similarly to word pattern inference algorithms @cite_35 @cite_9 , begin with the universal query and iteratively specialize it by incorporating subfragments common to all positive examples (a minimal sketch of the automata-based ingredients follows this record).
|
{
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_32",
"@cite_39",
"@cite_19",
"@cite_2"
],
"mid": [
"1965415591",
"1505882215",
"2031469331",
"2569219525",
"2092386826",
"140384553"
],
"abstract": [
"Abstract Assume a finite alphabet of constant symbols and a disjoint infinite alphabet of variable symbols . A pattern is a non-null finite string of constant and variable symbols. The language of a pattern is all strings obtainable by substituting non-null strings of constant symbols for the variables of the pattern. A sample is a finite nonempty set of non-null strings of constant symbols. Given a sample S , a pattern p is descriptive of S provided the language of p contains S and does not properly contain the language of any other pattern that contains S . The computational problem of finding a pattern descriptive of a given sample is studied. The main result is a polynomial-time algorithm for the special case of patterns containing only one variable symbol (possibly occurring several times in the pattern). Several other results are proved concerning the class of languages generated by patterns and the problem of finding a descriptive pattern.",
"A pattern is a string of constant symbols and variable symbols. The language of a pattern p is the set of all strings obtained by substituting any non-empty constant string for each variable symbol in p. A regular pattern has at most one occurrence of each variable symbol. In this paper, we consider polynomial time inference from positive data for the class of extended regular pattern languages which are sets of all strings obtained by substituting any (possibly empty) constant string, instead of non-empty string. Our inference machine uses MINL calculation which finds a minimal language containing a given finite set of strings. The relation between MINL calculation for the class of extended regular pattern languages and the longest common subsequence problem is also discussed.",
"The inductive inference of the class of k-testable languages in the strict sense (k-TSSL) is considered. A k-TSSL is essentially defined by a finite set of substrings of length k that are permitted to appear in the strings of the language. Given a positive sample R of strings of an unknown language, a deterministic finite-state automaton that recognizes the smallest k-TSSL containing R is obtained. The inferred automaton is shown to have a number of transitions bounded by O(m) where m is the number of substrings defining this k-TSSL, and the inference algorithm works in O(kn log m) where n is the sum of the lengths of all the strings in R. The proposed methods are illustrated through syntactic pattern recognition experiments in which a number of strings generated by ten given (source) non-k-TSSL grammars are used to infer ten k-TSSL stochastic automata, which are further used to classify new strings generated by the same source grammars. The results of these experiments are consistent with the theory and show the ability of (stochastic) k-TSSLs to approach other classes of regular languages.",
"Inferring an appropriate DTD or XML Schema Definition (XSD) for a given collection of XML documents essentially reduces to learning deterministic regular expressions from sets of positive example words. Unfortunately, there is no algorithm capable of learning the complete class of deterministic regular expressions from positive examples only, as we will show. The regular expressions occurring in practical DTDs and XSDs, however, are such that every alphabet symbol occurs only a small number of times. As such, in practice it suffices to learn the subclass of deterministic regular expressions in which each alphabet symbol occurs at most k times, for some small k. We refer to such expressions as k-occurrence regular expressions (k-OREs for short). Motivated by this observation, we provide a probabilistic algorithm that learns k-OREs for increasing values of k, and selects the deterministic one that best describes the sample based on a Minimum Description Length argument. The effectiveness of the method is empirically validated both on real world and synthetic data. Furthermore, the method is shown to be conservative over the simpler classes of expressions considered in previous work.",
"",
""
]
}
|
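To ground the contrast drawn in the record above, the following minimal Python sketch (assuming plain strings; it is not an implementation of any cited algorithm) shows two ingredients of automata-based inference: a prefix-tree acceptor built from the positive examples, and the negative-example consistency check that would gate each generalization step such as a fusion of states; the fusion step itself is omitted.

```python
def build_pta(positives):
    """Prefix-tree acceptor: states are prefixes of the positive words, so the
    automaton accepts exactly the positive sample before any generalization."""
    states, trans, accepting = {""}, {}, set(positives)
    for w in positives:
        for i, a in enumerate(w):
            trans[(w[:i], a)] = w[:i + 1]   # transition on letter a
            states.add(w[:i + 1])
    return states, trans, accepting

def accepts(trans, accepting, word, start=""):
    state = start
    for a in word:
        state = trans.get((state, a))
        if state is None:
            return False
    return state in accepting

def consistent_with_negatives(trans, accepting, negatives):
    """A candidate generalization (e.g., a state fusion) is kept only if this still holds."""
    return not any(accepts(trans, accepting, w) for w in negatives)

# The PTA for {"ab", "abb"} accepts exactly those two words:
states, trans, acc = build_pta(["ab", "abb"])
assert accepts(trans, acc, "abb") and not accepts(trans, acc, "abbb")
assert consistent_with_negatives(trans, acc, ["ba", "a"])
```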
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
XLearner @cite_26 is a practical system that infers XQuery programs. It uses Angluin’s DFA inference algorithm @cite_34 to construct the XPath components of the XQuery program. The system uses direct user interaction, essentially equivalence and membership queries, to refine the inferred query. Because of that, the learning framework, based on the minimally adequate teacher model of @cite_34 , is different from ours and allows more powerful queries to be inferred. We also point out that learning twigs is not feasible with equivalence queries only @cite_21 .
|
{
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_34"
],
"mid": [
"2145051847",
"119375009",
"1989445634"
],
"abstract": [
"We present XLearner, a novel tool that helps the rapid development of XML mapping queries written in XQuery. XLearner is novel in that it learns XQuery queries consistent with given examples (fragments) of intended query results. XLearner combines known learning techniques, incorporates mechanisms to cope with issues specific to the XQuery learning context, and provides a systematic way for the semiautomatic development of queries. We describe the XLearner system. It presents algorithms for learning various classes of XQuery, shows that a minor extension gives the system a practical expressive power, and reports experimental results to demonstrate how XLearner outputs reasonably complicated queries with only a small number of interactions with the user.",
"This work analyzes the application of active learning using example-based queries to the problem of constructing an XPath expression from visual interaction with an human user.",
"The problem of identifying an unknown regular set from examples of its members and nonmembers is addressed. It is assumed that the regular set is presented by a minimally adequate Teacher, which can answer membership queries about the set and can also test a conjecture and indicate whether it is equal to the unknown set and provide a counterexample if not. (A counterexample is a string in the symmetric difference of the correct set and the conjectured set.) A learning algorithm L* is described that correctly learns any regular set from any minimally adequate Teacher in time polynomial in the number of states of the minimum dfa for the set and the maximum length of any counterexample provided by the Teacher. It is shown that in a stochastic setting the ability of the Teacher to test conjectures may be replaced by a random sampling oracle, EX( ). A polynomial-time learning algorithm is shown for a particular problem of context-free language identification."
]
}
|
1106.3725
|
1519445261
|
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be learnable. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query with a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning unfeasible. Published in ICDT 2012, Berlin.
|
The authors of @cite_20 propose learning @math -contextual tree languages to infer queries for web wrappers. @math -contextual tree languages form a subclass of regular tree languages that allows specifying conditions on the nodes of the tree at depth up to @math , where each condition involves exactly @math subsequent children of a node. Because only nodes at bounded depth can be inspected and the relative order among children is used, @math -contextual tree languages are incomparable with twig queries, which can inspect nodes at arbitrary depths but ignore the relative order of nodes.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2126183407"
],
"abstract": [
"This paper introduces a novel method for learning a wrapper for extraction of information from web pages, based upon (k,l)-contextual tree languages. It also introduces a method to learn good values of k and l based on a few positive and negative examples. Finally, it describes how the algorithm can be integrated in a tool for information extraction."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, which means both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimation of OMP is also derived. By constructing an example it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitude. Our results are also compared in detail with some related previous ones.
|
Although Theorem 5.1 in @cite_22 is coherence-based while Corollary 1 @math is RIC-based, both provide conditions for successful support recovery under measurement noise, based on which the recovery error is further estimated. The comparison is conducted from two aspects.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2097323375"
],
"abstract": [
"Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, which means both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimation of OMP is also derived. By constructing an example it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitude. Our results are also compared in detail with some related previous ones.
|
First, consider the ratio of the upper bounds on the recovery error in ) and ): @math According to Proposition 4.1 in @cite_25 , @math , and thus @math . This means that the error bound given by Corollary 1 @math is at least as good as that in [28, Th.5.1].
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2161431029"
],
"abstract": [
"This paper considers constrained lscr1 minimization methods in a unified framework for the recovery of high-dimensional sparse signals in three settings: noiseless, bounded error, and Gaussian noise. Both lscr1 minimization with an lscrinfin constraint (Dantzig selector) and lscr1 minimization under an llscr2 constraint are considered. The results of this paper improve the existing results in the literature by weakening the conditions and tightening the error bounds. The improvement on the conditions shows that signals with larger support can be recovered accurately. In particular, our results illustrate the relationship between lscr1 minimization with an llscr2 constraint and lscr1 minimization with an lscrinfin constraint. This paper also establishes connections between restricted isometry property and the mutual incoherence property. Some results of Candes, Romberg, and Tao (2006), Candes and Tao (2007), and Donoho, Elad, and Temlyakov (2006) are extended."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
Second, consider the sufficient conditions for successful support recovery in the two results. A direct comparison between ) and ) is difficult since, as far as we know, there is no clear comparison between @math and @math for an arbitrary sensing matrix. For simplicity, consider the scenario in which the sensing matrix is Gaussian and @math, @math, and @math grow proportionally, i.e. @math and @math as @math, where @math are two constants. Results in @cite_36 show that there exists a constant @math such that @math with high probability. Another result in @cite_8 reveals that @math holds with high probability, where @math is a constant. Thus @math holds with high probability. Inequality ) implies that and the following inequality implies ). Since the bound in ) decreases at a higher rate than that in ) as @math increases, the sufficient condition ) is more relaxed in this sense.
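The proportional-growth regime invoked above can also be probed numerically. Below is a minimal NumPy sketch (our own illustration, not code from any cited work) that Monte Carlo samples random supports of a Gaussian sensing matrix to obtain a lower estimate of its order-k restricted isometry constant; the exact constant is combinatorial to compute, and all parameter values here are arbitrary.

```python
import numpy as np

def estimate_ric(n, p, k, trials=500, seed=0):
    """Monte Carlo lower estimate of the order-k restricted isometry constant
    of an n-by-p Gaussian matrix whose columns have unit norm in expectation."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((n, p)) / np.sqrt(n)
    delta = 0.0
    for _ in range(trials):
        support = rng.choice(p, size=k, replace=False)
        sv = np.linalg.svd(Phi[:, support], compute_uv=False)
        # RIP bounds the squared singular values of every k-column submatrix.
        delta = max(delta, abs(sv[0] ** 2 - 1.0), abs(sv[-1] ** 2 - 1.0))
    return delta

# Illustrative proportional growth: p = 2n, k = n / 10, as n increases.
for n in (100, 200, 400):
    print(n, estimate_ric(n=n, p=2 * n, k=n // 10))
```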
|
{
"cite_N": [
"@cite_36",
"@cite_8"
],
"mid": [
"2018429487",
"2127271355"
],
"abstract": [
"Compressed sensing (CS) seeks to recover an unknown vector with @math entries by making far fewer than @math measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply @math . CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS—exact reconstruction from seemingly undersampled measurements—it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.",
"This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
In @cite_3 , it is proved that for the @math process, the support of a @math -sparse signal @math can be recovered, provided that @math and @math . By comparison, it is shown below that Corollary @math is at least as good as this conclusion.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2028878677"
],
"abstract": [
"Orthogonal matching pursuit (OMP) is a commonly used algorithm for recovery sparse signals due to its low complexity and simple implementation. We analyze the convergence property of OMP based on the restricted isometry property (RIP), and show that the OMP algorithm can exactly recover an arbitrary K-sparse signal using K steps provided that the sampling matrix Φ satisfies the RIP with parameter . In addition, we also give the convergence analysis of OMP for the case of inaccurate measurements. Moreover, a variant of OMP, referred to as multi-candidate OMP (MOMP) algorithm, is proposed to recover sparse signals, which can further reduce the computational complexity of OMP. The key point of MOMP is that at each step it selects multi-candidates adding to the optimal atom set, whilst OMP only selects one atom. We also present the convergence analysis of MOMP using the RIP. Finally, we testify the performance of the proposed algorithm using several numerical experiments."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
First, let @math satisfy @math , i.e. @math Consider the ratio of the required upper bound on @math in the result of @cite_3 to that in Corollary @math : @math It can be concluded from @math that @math , which means that the requirement on @math in Corollary @math is more relaxed.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2028878677"
],
"abstract": [
"Orthogonal matching pursuit (OMP) is a commonly used algorithm for recovery sparse signals due to its low complexity and simple implementation. We analyze the convergence property of OMP based on the restricted isometry property (RIP), and show that the OMP algorithm can exactly recover an arbitrary K-sparse signal using K steps provided that the sampling matrix Φ satisfies the RIP with parameter . In addition, we also give the convergence analysis of OMP for the case of inaccurate measurements. Moreover, a variant of OMP, referred to as multi-candidate OMP (MOMP) algorithm, is proposed to recover sparse signals, which can further reduce the computational complexity of OMP. The key point of MOMP is that at each step it selects multi-candidates adding to the optimal atom set, whilst OMP only selects one atom. We also present the convergence analysis of MOMP using the RIP. Finally, we testify the performance of the proposed algorithm using several numerical experiments."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
Despite the difference in requirements, the recovery error bounds given in Theorem 2 of @cite_3 and in Corollary @math are the same, since both bounds are derived under the condition that the support set of the sparse signal is perfectly recovered.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2028878677"
],
"abstract": [
"Orthogonal matching pursuit (OMP) is a commonly used algorithm for recovery sparse signals due to its low complexity and simple implementation. We analyze the convergence property of OMP based on the restricted isometry property (RIP), and show that the OMP algorithm can exactly recover an arbitrary K-sparse signal using K steps provided that the sampling matrix Φ satisfies the RIP with parameter . In addition, we also give the convergence analysis of OMP for the case of inaccurate measurements. Moreover, a variant of OMP, referred to as multi-candidate OMP (MOMP) algorithm, is proposed to recover sparse signals, which can further reduce the computational complexity of OMP. The key point of MOMP is that at each step it selects multi-candidates adding to the optimal atom set, whilst OMP only selects one atom. We also present the convergence analysis of MOMP using the RIP. Finally, we testify the performance of the proposed algorithm using several numerical experiments."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
In @cite_6 , the main result concerns error estimation for OMP. It is proved that where @math is a non-sparse signal to be recovered, @math is the estimate produced by OMP at the @math th iteration, @math is the @math error between the best @math -term approximation of @math and @math , and @math is the RIC of order @math . This conclusion gives an upper bound on the error between the original signal and the estimate produced at any iteration of OMP.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2057380028"
],
"abstract": [
"In this paper we investigate the efficiency of the Orthogonal Matching Pursuit algorithm (OMP) for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We also introduce a stronger Homogenous Restricted Isometry Property which we show is satisfied with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain upper estimates for the error of approximation by OMP in terms of the error of the best n-term approximation (Lebesgue-type inequalities). We also present and discuss some open problems about OMP. This is a development of recent results obtained by D.L. Donoho, M. Elad and V.N. Temlyakov."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
The original signal to be recovered in @cite_6 is non-sparse, and the inputs @math and @math are assumed to be unperturbed. Thus the result actually gives an upper bound on the error between @math and @math for the ( @math ) process. Setting @math , this result can be rewritten as In Corollary 2, the result is
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2057380028"
],
"abstract": [
"In this paper we investigate the efficiency of the Orthogonal Matching Pursuit algorithm (OMP) for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We also introduce a stronger Homogenous Restricted Isometry Property which we show is satisfied with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain upper estimates for the error of approximation by OMP in terms of the error of the best n-term approximation (Lebesgue-type inequalities). We also present and discuss some open problems about OMP. This is a development of recent results obtained by D.L. Donoho, M. Elad and V.N. Temlyakov."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
Before the comparison, it is worth mentioning that there are fundamental differences between the above two conclusions. First, the main concern of Corollary 2 is the set of conditions that guarantees recovery of the support of the best @math -term approximation of @math ; based on this successful support recovery, an upper bound on the error is then estimated. In the reference, however, the @math error is given directly, regardless of support recovery. Sometimes recovering the support set, rather than obtaining a more accurate estimate, is the fundamental concern. Second, compared with @cite_6 , this paper has an apparent limitation: the non-sparse signal considered here is almost sparse, whereas the one in @cite_6 is arbitrary.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2057380028"
],
"abstract": [
"In this paper we investigate the efficiency of the Orthogonal Matching Pursuit algorithm (OMP) for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We also introduce a stronger Homogenous Restricted Isometry Property which we show is satisfied with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain upper estimates for the error of approximation by OMP in terms of the error of the best n-term approximation (Lebesgue-type inequalities). We also present and discuss some open problems about OMP. This is a development of recent results obtained by D.L. Donoho, M. Elad and V.N. Temlyakov."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
Despite these differences, a tentative comparison of the recovery error estimates is given as follows. Note that it is difficult to determine which result is better, since the result in @cite_6 involves @math , which does not appear in our work. However, a condition involving @math is given under which ) is at least as good as ). From ) one has If from ) one has Compared with ), ) actually gives a tighter bound.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2057380028"
],
"abstract": [
"In this paper we investigate the efficiency of the Orthogonal Matching Pursuit algorithm (OMP) for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We also introduce a stronger Homogenous Restricted Isometry Property which we show is satisfied with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain upper estimates for the error of approximation by OMP in terms of the error of the best n-term approximation (Lebesgue-type inequalities). We also present and discuss some open problems about OMP. This is a development of recent results obtained by D.L. Donoho, M. Elad and V.N. Temlyakov."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
For the @math process with a @math -sparse signal @math , Davenport and Wakin proved in @cite_35 that if @math satisfies the RIP of order @math with @math and then OMP will recover @math sequentially from @math and @math in @math iterations.
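For readers less familiar with the algorithm being analyzed, the following is a minimal NumPy sketch of the generic OMP iteration (greedy column selection followed by a least-squares refit on the selected columns). It is our own illustration, not the specific variant or stopping rule of any cited analysis; under the RIP condition quoted above, each greedy step selects a correct index, which is why K iterations suffice in the noiseless sparse case.

```python
import numpy as np

def omp(Phi, y, K):
    """Generic Orthogonal Matching Pursuit: K greedy iterations, re-solving a
    least-squares problem on the selected columns after every selection."""
    n, p = Phi.shape
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the span of the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(p)
    x_hat[support] = coef
    return x_hat, support
```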
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2134474909"
],
"abstract": [
"Orthogonal matching pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K+1 (with isometry constant δ <; [ 1 ( 3√K)]) is sufficient for OMP to exactly recover any K-sparse signal. The analysis relies on simple and intuitive observations about OMP and matrices which satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the regularized OMP (ROMP) algorithm."
]
}
|
1106.3373
|
2004526834
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for sparse approximation. Previous studies of OMP have considered the recovery of a sparse signal x through Φ and y = Φx + b, where Φ is a matrix with more columns than rows and b denotes the measurement noise. In this paper, based on the Restricted Isometry Property (RIP), the performance of OMP is analyzed under general perturbations, meaning that both y and Φ are perturbed. Though the exact recovery of an almost sparse signal x is no longer feasible, the main contribution reveals that the support set of the best k-term approximation of x can be recovered under reasonable conditions. The error bound between x and the estimate produced by OMP is also derived. By constructing an example, it is also demonstrated that the sufficient conditions for support recovery of the best k-term approximation of x are rather tight. When x is strong-decaying, it is proved that the sufficient conditions for support recovery of the best k-term approximation of x can be relaxed, and the support can even be recovered in the order of the entries' magnitudes. Our results are also compared in detail with some related previous ones.
|
Corollary 4 is derived from Theorem 4. For @math , one has @math . Thus, it can be seen from ) and ) that Corollary 4 is at least as good as the conclusion in @cite_35 when @math is greater than @math (i.e., @math ), and the latter is better otherwise.
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2134474909"
],
"abstract": [
"Orthogonal matching pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K+1 (with isometry constant δ <; [ 1 ( 3√K)]) is sufficient for OMP to exactly recover any K-sparse signal. The analysis relies on simple and intuitive observations about OMP and matrices which satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the regularized OMP (ROMP) algorithm."
]
}
|
1106.3457
|
2949282174
|
We propose a purely extensional semantics for higher-order logic programming. In this semantics program predicates denote sets of ordered tuples, and two predicates are equal iff they are equal as sets. Moreover, every program has a unique minimum Herbrand model which is the greatest lower bound of all Herbrand models of the program and the least fixed-point of an immediate consequence operator. We also propose an SLD-resolution proof procedure which is proven sound and complete with respect to the minimum model semantics. In other words, we provide a purely extensional theoretical framework for higher-order logic programming which generalizes the familiar theory of classical (first-order) logic programming.
|
The above program behavior is best explained by the following comment from @cite_6 : ``in HiLog predicates and other higher-order syntactic objects are not equal unless they (i.e., their names) are equated explicitly''.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1979966822"
],
"abstract": [
"Abstract We describe a novel logic, called HiLog, and show that it provides a more suitable basis for logic programming than does traditional predicate logic. HiLog has a higher-order syntax and allows arbitrary terms to appear in places where predicates, functions, and atomic formulas occur in predicate calculus. But its semantics is first-order and admits a sound and complete proof procedure. Applications of HiLog are discussed, including DCG grammars, higher-order and modular logic programming, and deductive databases."
]
}
|
1106.2436
|
2143140272
|
We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known "experts" setting, where the decision maker can view all rewards, and the multi-armed bandits setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially-matching lower bounds.
|
The standard multi-armed bandits problem assumes no relationship between the actions. Quite a few papers have studied alternative models in which the actions are endowed with a richer structure. However, in the large majority of such papers, the feedback structure is the same as in the standard multi-armed bandits setting. Examples include @cite_9 , where the actions' rewards are assumed to be drawn from a statistical distribution with correlations between the actions; and @cite_0 @cite_10 , where the actions' rewards are assumed to satisfy a Lipschitz continuity property with respect to a distance measure between the actions.
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_10"
],
"mid": [
"2010189695",
"2280135673",
"2115519224"
],
"abstract": [
"In this paper we consider the multiarmed bandit problem where the arms are chosen from a subset of the real line and the mean rewards are assumed to be a continuous function of the arms. The problem with an infinite number of arms is much more difficult than the usual one with a finite number of arms because the built-in learning task is now infinite dimensional. We devise a kernel estimator-based learning scheme for the mean reward as a function of the arms. Using this learning scheme, we construct a class of certainty equivalence control with forcing schemes and derive asymptotic upper bounds on their learning loss. To the best of our knowledge, these bounds are the strongest rates yet available. Moreover, they are stronger than the @math required for optimality with respect to the average-cost-per-unit-time criterion.",
"",
"In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of @math trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the \"Lipschitz MAB problem\". We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L,X) we define an isometry invariant Max Min COV(X) which bounds from below the performance of Lipschitz MAB algorithms for @math , and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions."
]
}
|
1106.2436
|
2143140272
|
We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known "experts" setting, where the decision maker can view all rewards, and the multi-armed bandits setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially-matching lower bounds.
|
Our work is also somewhat related to the contextual bandit problem (e.g., @cite_4 @cite_3 ), where the standard multi-armed bandits setting is augmented with some side-information provided in each round, which can be used to determine which action to pick. While we also consider additional side-information, it is in a more specific sense. Moreover, our goal is still to compete against the best single action, rather than some set of policies which use this side-information.
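For concreteness, competing against the best single action corresponds to the standard adversarial regret; written in our own notation (introduced only for illustration), with rewards $r_t(i)$ and chosen actions $a_t$ over $T$ rounds,

```latex
\mathrm{Regret}_T \;=\; \max_{i} \sum_{t=1}^{T} r_t(i) \;-\; \sum_{t=1}^{T} r_t(a_t),
```

whereas contextual-bandit formulations instead measure regret against the best policy in a class that maps the side-information to actions.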
|
{
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2119850747",
"2112420033"
],
"abstract": [
"We present Epoch-Greedy, an algorithm for multi-armed bandits with observable side information. Epoch-Greedy has the following properties: No knowledge of a time horizon @math is necessary. The regret incurred by Epoch-Greedy is controlled by a sample complexity bound for a hypothesis class. The regret scales as @math or better (sometimes, much better). Here @math is the complexity term in a sample complexity bound for standard supervised learning.",
"Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5 click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce."
]
}
|
1106.1887
|
2951834861
|
This paper considers the problem of learning, from samples, the dependency structure of a system of linear stochastic differential equations, when some of the variables are latent. In particular, we observe the time evolution of some variables, and never observe other variables; from this, we would like to find the dependency structure between the observed variables - separating out the spurious interactions caused by the (marginalizing out of the) latent variables' time series. We develop a new method, based on convex optimization, to do so in the case when the number of latent variables is smaller than the number of observed ones. For the case when the dependency structure between the observed variables is sparse, we theoretically establish a high-dimensional scaling result for structure recovery. We verify our theoretical result with both synthetic and real data (from the stock market).
|
Sparse plus Low-Rank Matrix Decomposition: Our results are based on the possibility of separating a low-rank matrix from a sparse one, given their sum (either the entire matrix, or randomly sub-sampled elements thereof) -- see @cite_16 @cite_41 @cite_12 @cite_25 @cite_39 for some recent results, as well as applications in graph clustering @cite_27 @cite_0 , collaborative filtering @cite_9 , image coding @cite_14 , etc. Our setting is different because we observe correlated linear functions of the sum matrix, and furthermore these linear functions are generated by the stochastic linear dynamical system described by the matrix itself. Another difference is that several of these papers focus on recovery of the low-rank component, whereas we focus on the sparse one. These two objectives exhibit very different high-dimensional scaling in our linear observation setting.
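For concreteness, the convex program underlying most of the cited decomposition results takes the following form (notation ours; the weight $\lambda$ and the exact constraint vary across papers):

```latex
\min_{L,\;S}\; \|L\|_{*} \;+\; \lambda\,\|S\|_{1}
\quad\text{subject to}\quad L + S = M,
```

where $\|\cdot\|_{*}$ is the nuclear norm, $\|\cdot\|_{1}$ is the entrywise $\ell_1$ norm, and $M$ is the observed sum; sub-sampled or noisy observation models replace the equality constraint accordingly.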
|
{
"cite_N": [
"@cite_14",
"@cite_41",
"@cite_9",
"@cite_39",
"@cite_0",
"@cite_27",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"2163318306",
"2951443864",
"",
"2952716509",
"1871922518",
"2952000622",
"2003753589",
"2950161005",
"2154098284"
],
"abstract": [
"We introduce an algorithm for a non-negative 3D tensor factorization for the purpose of establishing a local parts feature decomposition from an object class of images. In the past, such a decomposition was obtained using non-negative matrix factorization (NMF) where images were vectorized before being factored by NMF. A tensor factorization (NTF) on the other hand preserves the 2D representations of images and provides a unique factorization (unlike NMF which is not unique). The resulting \"factors\" from the NTF factorization are both sparse (like with NMF) but also separable allowing efficient convolution with the test image. Results show a superior decomposition to what an NMF can provide on all fronts - degree of sparsity, lack of ghost residue due to invariant parts and efficiency of coding of around an order of magnitude better. Experiments on using the local parts decomposition for face detection using SVM and Adaboost classifiers demonstrate that the recovered features are discriminatory and highly effective for classification.",
"This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"",
"On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.",
"We suggest using the max-norm as a convex surrogate constraint for clustering. We show how this yields a better exact cluster recovery guarantee than previously suggested nuclear-norm relaxation, and study the effectiveness of our method, and other related convex relaxations, compared to other clustering approaches.",
"This paper considers the problem of clustering a partially observed unweighted graph---i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense (observed) connectivity within clusters, and sparse across clusters. We take a novel yet natural approach to this problem, by focusing on finding the clustering that minimizes the number of \"disagreements\"---i.e., the sum of the number of (observed) missing edges within clusters, and (observed) present edges across clusters. Our algorithm uses convex optimization; its basis is a reduction of disagreement minimization to the problem of recovering an (unknown) low-rank matrix and an (unknown) sparse matrix from their partially observed sum. We evaluate the performance of our algorithm on the classical Planted Partition Stochastic Block Model. Our main theorem provides sufficient conditions for the success of our algorithm as a function of the minimum cluster size, edge density and observation probability; in particular, the results characterize the tradeoff between the observation probability and the edge density gap. When there are a constant number of clusters of equal size, our results are optimal up to logarithmic factors.",
"Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation to splitting the specified matrix into its components by minimizing a linear combination of the l1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pat- tern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.",
"In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.",
"This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both (a) erasures: most entries are not observed, and (b) errors: values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when the natural convex relaxation of minimizing rank plus support succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. On the one hand, corollaries obtained by specializing this one single result in different ways recover (up to poly-log factors) all the existing works in matrix completion, and sparse and low-rank matrix recovery. On the other hand, our results also provide the first guarantees for (a) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and (b) deterministic matrix completion."
]
}
|
1106.1887
|
2951834861
|
This paper considers the problem of learning, from samples, the dependency structure of a system of linear stochastic differential equations, when some of the variables are latent. In particular, we observe the time evolution of some variables, and never observe other variables; from this, we would like to find the dependency structure between the observed variables - separating out the spurious interactions caused by the (marginalizing out of the) latent variables' time series. We develop a new method, based on convex optimization, to do so in the case when the number of latent variables is smaller than the number of observed ones. For the case when the dependency structure between the observed variables is sparse, we theoretically establish a high-dimensional scaling result for structure recovery. We verify our theoretical result with both synthetic and real data (from the stock market).
|
Time-series Forecasting: Motivated by applications in finance, time-series forecasting has received considerable attention over the past three decades @cite_4 . In model-based approaches, the time series is assumed to evolve according to some statistical model, such as a linear regression model @cite_44 , a transfer function model @cite_15 , or a vector autoregressive model @cite_22 . In each case, researchers have developed methods to learn the parameters of the model for the purpose of forecasting. In this paper, we focus on linear stochastic dynamical systems, which are an instance of vector autoregressive models. Previous work on estimating the parameters of such models includes ad hoc use of neural networks @cite_38 and support vector machines @cite_19 , without theoretical guarantees on the performance of the algorithms. Our work differs from these results because, although our method provides better prediction than comparable algorithms, our main focus is sparse model selection rather than prediction. Once a sparse model is selected, prediction quality can be studied as a separate subject.
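To make the connection explicit, a linear stochastic dynamical system of the kind studied here can be written, in generic notation of our own, as a first-order vector autoregression after time discretization with step $h$:

```latex
dx(t) = A\,x(t)\,dt + dB(t)
\quad\Longrightarrow\quad
x_{t+h} \;\approx\; (I + hA)\,x_t + w_t,
```

where $w_t$ collects the Brownian increment over the step; the sparsity pattern of the drift matrix $A$ is then exactly the dependency structure one hopes to recover from the sampled trajectory.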
|
{
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_22",
"@cite_44",
"@cite_19",
"@cite_15"
],
"mid": [
"1527028123",
"",
"2126231666",
"2045823474",
"2012079387",
"2114001875"
],
"abstract": [
"",
"",
"1. Overview. 2. Fundamental Concepts. 3. Stationary Time Series Models. 4. Non-Stationary Time Series Models. 5. Forecasting. 6. Model Identification. 7. Parameter Estimation, Diagnostic Checking, and Model Selection. 8. Seasonal Time Series Models. 9. Intervention Analysis and Outlier Detection. 10. Fourier Analysis. 11. Spectral Theory of Stationary Processes. 12. Estimation of the Spectrum. 13. Transfer Function Models. 14. Vector Time Series Models. 15. State Space Models and the Kalman Filter. 16. Aggregation and Systematic Sampling in Time Series. 17. References. 18. Appendix.",
"An introduction to forecasting Basic statistical concepts Forecasting by using regression analysis Simple linear regression Multiple regression Topics in regression analysis Forecasting by using time series regression, decomposition methods and exponential smoothing Time series regression Decomposition methods Exponential smoothing Forecasting by using basic techniques of the box Jenkins methodology Nonseasonal, box-Jenkins models and their tentative identification Estimation, diagnostic checking and forecasting for nonseasonal box-Jenkins models An introduction to box-Jenkins seasonal modelling Forecasting by using advanced technology of the box-Jenkins methodology General box-Jenkins seasonal modelling Using the box-Jenkins methodology to improve time series regression models and to implement exponential smoothing Transfer functions and intervention models.",
"Abstract Support vector machines (SVMs) are promising methods for the prediction of financial time-series because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in financial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction.",
"From the Publisher: This is a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970. It focuses on practical techniques throughout, rather than a rigorous mathematical treatment of the subject. It explores the building of stochastic (statistical) models for time series and their use in important areas of application forecasting, model specification, estimation, and checking, transfer function modeling of dynamic relationships, modeling the effects of intervention events, and process control. Features sections on: recently developed methods for model specification, such as canonical correlation analysis and the use of model selection criteria; results on testing for unit root nonstationarity in ARIMA processes; the state space representation of ARMA models and its use for likelihood estimation and forecasting; score test for model checking; and deterministic components and structural components in time series models and their estimation based on regression-time series model methods."
]
}
|
1106.1925
|
1665115054
|
It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives can always be described through locations in the Birkhoff polytope, i.e., doubly-stochastic matrices (DSMs). We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization. Gradients of this operator can be computed via backpropagation, resulting in an algorithm we call Sinkhorn propagation, or SinkProp. This approach can be combined with a wide range of gradient-based approaches to rank learning. We demonstrate the utility of SinkProp on several information retrieval data sets.
|
SinkProp builds on a rapidly expanding set of approaches to rank learning. Early learning-to-rank methods employed surrogate gain functions, as approximations to the target evaluation measure were necessary due to its aforementioned non-differentiability. More recently, methods have been developed to optimize the expectation of the target evaluation measure, including SoftRank @cite_17 , BoltzRank @cite_14 , and SmoothRank @cite_10 . These methods all attempt to maximize the expected gain of the ranking under any of the gain functions described above. The crucial component of each is the estimate of the distribution over rankings: SoftRank uses a rank-binomial approximation, which entails sampling and sorting ranks; BoltzRank uses a fixed set of sampled ranks; and SmoothRank uses a softmax on rankings based on a noisy model of scores. SinkProp can be viewed as another method to optimize expected ranking gain, but the effect of the scaling is to concentrate the mass of the distribution over ranks on a small set, which peaks on the single rank selected by the model at test time.
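One way to see why such expected gains depend only on marginals over rank positions (the property exploited by SinkProp) is the following identity, stated in our own notation for a rank-linear gain:

```latex
G(\pi) = \sum_{i} a_i\, c_{\pi(i)}
\quad\Longrightarrow\quad
\mathbb{E}_{\pi}[G] \;=\; \sum_{i,j} a_i\, c_j\, \Pr[\pi(i) = j] \;=\; \sum_{i,j} a_i\, c_j\, M_{ij},
```

where $a_i$ is the per-item gain (e.g., a function of relevance), $c_j$ the position discount, and $M$ the matrix of marginals, which is doubly stochastic and hence a point in the Birkhoff polytope.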
|
{
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_17"
],
"mid": [
"2159545104",
"2051928158",
"2059001985"
],
"abstract": [
"Ranking a set of retrieved documents according to their relevance to a query is a popular problem in information retrieval. Methods that learn ranking functions are difficult to optimize, as ranking performance is typically judged by metrics that are not smooth. In this paper we propose a new listwise approach to learning to rank. Our method creates a conditional probability distribution over rankings assigned to documents for a given query, which permits gradient ascent optimization of the expected value of some performance measure. The rank probabilities take the form of a Boltzmann distribution, based on an energy function that depends on a scoring function composed of individual and pairwise potentials. Including pairwise potentials is a novel contribution, allowing the model to encode regularities in the relative scores of documents; existing models assign scores at test time based only on individual documents, with no pairwise constraints between documents. Experimental results on the LETOR3.0 data set show that our method out-performs existing learning approaches to ranking.",
"Most ranking algorithms are based on the optimization of some loss functions, such as the pairwise loss. However, these loss functions are often different from the criteria that are adopted to measure the quality of the web page ranking results. To overcome this problem, we propose an algorithm which aims at directly optimizing popular measures such as the Normalized Discounted Cumulative Gain and the Average Precision. The basic idea is to minimize a smooth approximation of these measures with gradient descent. Crucial to this kind of approach is the choice of the smoothing factor. We provide various theoretical analysis on that choice and propose an annealing algorithm to iteratively minimize a less and less smoothed approximation of the measure of interest. Results on the Letor benchmark datasets show that the proposed algorithm achieves state-of-the-art performances.",
"We address the problem of learning large complex ranking functions. Most IR applications use evaluation metrics that depend only upon the ranks of documents. However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters,and because IR metrics are non-smooth,we need to find a smooth proxy objective that can be used for training. We present a new family of training objectives that are derived from the rank distributions of documents, induced by smoothed scores. We call this approach SoftRank. We focus on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG and we compare it with three other training objectives in the recent literature. We present two main results. First, SoftRank yields a very good way of optimizing NDCG. Second, we show that it is possible to achieve state of the art test set NDCG results by optimizing a soft NDCG objective on the training set with a different discount function"
]
}
|
1106.1925
|
1665115054
|
It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives can always be described through locations in the Birkhoff polytope, i.e., doubly-stochastic matrices (DSMs). We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization. Gradients of this operator can be computed via backpropagation, resulting in an algorithm we call Sinkhorn propagation, or SinkProp. This approach can be combined with a wide range of gradient-based approaches to rank learning. We demonstrate the utility of SinkProp on several information retrieval data sets.
|
Sinkhorn scaling itself is a long-standing method with a wide variety of applications, including discrete constraint satisfaction problems such as Sudoku @cite_7 and the updating of probabilistic belief matrices @cite_9 . It has also been used as a method for finding or approximating the matrix permanent, since the permanent of the scaled matrix equals the permanent of the original matrix multiplied by the product of the entries of the row and column scaling vectors @cite_3 . Recently, Sinkhorn normalization has been employed within a regret-minimization approach for on-line learning of permutations @cite_13 .
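As a concrete illustration of the permanent identity mentioned above, here is a minimal numpy sketch (a toy, brute-force check on a random 4x4 matrix; the function names are ours, not from the cited work). Sinkhorn balancing returns scaling vectors r and c such that diag(r) A diag(c) is doubly stochastic, and the permanent of the scaled matrix equals perm(A) times the product of the entries of r and c:

```python
import itertools
import numpy as np

def permanent(M):
    # Brute-force permanent; fine for tiny matrices only.
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def sinkhorn(A, iters=500):
    # Alternate row and column normalization, tracking the scaling vectors.
    r, c = np.ones(A.shape[0]), np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)        # make rows of diag(r) A diag(c) sum to 1
        c = 1.0 / (A.T @ r)      # make columns sum to 1
    return r, c

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1                    # positive matrix
r, c = sinkhorn(A)
B = np.diag(r) @ A @ np.diag(c)                 # approximately doubly stochastic
print(B.sum(axis=0), B.sum(axis=1))             # all close to 1
print(permanent(B), permanent(A) * np.prod(r) * np.prod(c))   # these two agree
```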
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_13",
"@cite_7"
],
"mid": [
"2099811697",
"2042757441",
"2171913181",
"2037552299"
],
"abstract": [
"Updating probabilistic belief matrices as new observations arrive, in the presence of noise, is a critical part of many algorithms for target tracking in sensor networks. These updates have to be carried out while preserving sum constraints, arising for example, from probabilities. This paper addresses the problem of updating belief matrices to satisfy sum constraints using scaling algorithms. We show that the convergence behavior of the Sinkhorn scaling process, used for scaling belief matrices, can vary dramatically depending on whether the prior unscaled matrix is exactly scalable or only almost scalable. We give an efficient polynomial-time algorithm based on the maximum-flow algorithm that determines whether a given matrix is exactly scalable, thus determining the convergence properties of the Sinkhorn scaling process. We prove that the Sinkhorn scaling process always provides a solution to the problem of minimizing the Kullback-Leibler distance of the physically feasible scaled matrix from the prior constraint-violating matrix, even when the matrices are not exactly scalable. We pose the scaling process as a linearly constrained convex optimization problem, and solve it using an interior-point method. We prove that even in cases in which the matrices are not exactly scalable, the problem can be solved to e-optimality in strongly polynomial time, improving the best known bound for the problem of scaling arbitrary nonnegative rectangular matrices to prescribed row and column sums.",
"Approximation of the permanent of a matrix with nonnegative entries is a well studied problem. The most successful approach to date for general matrices uses Markov chains to approximately sample from a distribution on weighted permutations, and Jerrum, Sinclair, and Vigoda developed such a method they proved runs in polynomial time in the input. The current bound on the running time of their method is O(n7(log n)4). Here we present a very different approach using sequential acceptance rejection, and show that for a class of dense problems this method has an O(n4 log n) expected running time.",
"We give an algorithm for the on-line learning of permutations. The algorithm maintains its uncertainty about the target permutation as a doubly stochastic weight matrix, and makes predictions using an efficient method for decomposing the weight matrix into a convex combination of permutations. The weight matrix is updated by multiplying the current matrix entries by exponential factors, and an iterative procedure is needed to restore double stochasticity. Even though the result of this procedure does not have a closed form, a new analysis approach allows us to prove an optimal (up to small constant factors) bound on the regret of our algorithm. This regret bound is significantly better than that of either Kalai and Vempala's more efficient Follow the Perturbed Leader algorithm or the computationally expensive method of explicitly representing each permutation as an expert.",
"The Sudoku puzzle is a discrete constraint satisfaction problem, as is the error correction decoding problem. We propose here an algorithm for solution to the Sinkhorn puzzle based on Sinkhorn balancing. Sinkhorn balancing is an algorithm for projecting a matrix onto the space of doubly stochastic matrices. The Sinkhorn balancing solver is capable of solving all but the most difficult puzzles. A proof of convergence is presented, with some information theoretic connections. A random generalization of the Sudoku puzzle is presented, for which the Sinkhorn-based solver is also very effective."
]
}
|
1106.1925
|
1665115054
|
It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives can always be described through locations in the Birkhoff polytope, i.e., doubly-stochastic matrices (DSMs). We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization. Gradients of this operator can be computed via backpropagation, resulting in an algorithm we call Sinkhorn propagation, or SinkProp. This approach can be combined with a wide range of gradient-based approaches to rank learning. We demonstrate the utility of SinkProp on several information retrieval data sets.
|
Although this work represents the first approach to ranking that has directly incorporated Sinkhorn normalization into the training procedure, previously-developed methods have also found it useful. The SoftRank algorithm @cite_17 , for example, uses Sinkhorn balancing at test-time, as a post-processing step for the approximated ranking matrix. Unlike the approach proposed here, however, it does not take this step into account when optimizing the objective function. SmoothRank uses half a step of Sinkhorn balancing, normalizing only the scores within each column of the matrix, to produce a distribution over items at a particular rank.
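The "half a step" mentioned above can be read as a column-only normalization of a positive score matrix, which yields a distribution over items for each rank, whereas a full Sinkhorn iteration alternates row and column normalization. A small illustrative sketch (toy scores and our own interpretation, not the cited implementations):

```python
import numpy as np

S = np.exp(np.array([[1.0, 0.2, 0.5],      # toy positive score matrix:
                     [0.3, 1.5, 0.1],      # rows = items, columns = ranks
                     [0.4, 0.6, 2.0]]))

half_step = S / S.sum(axis=0, keepdims=True)        # columns sum to 1: one
print(half_step.sum(axis=0))                        # distribution per rank

full_step = S / S.sum(axis=1, keepdims=True)        # row normalization ...
full_step /= full_step.sum(axis=0, keepdims=True)   # ... then column normalization
print(full_step.sum(axis=0), full_step.sum(axis=1)) # iterating both converges to a DSM
```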
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2059001985"
],
"abstract": [
"We address the problem of learning large complex ranking functions. Most IR applications use evaluation metrics that depend only upon the ranks of documents. However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters,and because IR metrics are non-smooth,we need to find a smooth proxy objective that can be used for training. We present a new family of training objectives that are derived from the rank distributions of documents, induced by smoothed scores. We call this approach SoftRank. We focus on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG and we compare it with three other training objectives in the recent literature. We present two main results. First, SoftRank yields a very good way of optimizing NDCG. Second, we show that it is possible to achieve state of the art test set NDCG results by optimizing a soft NDCG objective on the training set with a different discount function"
]
}
|
1106.2233
|
2080161383
|
Observational data usually comes with a multimodal nature, which means that it can be naturally represented by a multi-layer graph whose layers share the same set of vertices (objects) with different edges (pairwise relationships). In this paper, we address the problem of combining different layers of the multi-layer graph for an improved clustering of the vertices compared to using layers independently. We propose two novel methods, which are based on a joint matrix factorization and a graph regularization framework respectively, to efficiently combine the spectrum of the multiple graph layers, namely the eigenvectors of the graph Laplacian matrices. In each case, the resulting combination, which we call a “joint spectrum” of multiple layers, is used for clustering the vertices. We evaluate our approaches by experiments with several real world social network datasets. Results demonstrate the superior or competitive performance of the proposed methods compared to state-of-the-art techniques and common baseline methods, such as co-regularization and summation of information from individual graphs.
|
In addition to general graph-based data processing, there is a distinct branch of graph theory devoted to analyzing the spectra of graphs, namely spectral graph theory. The monograph by Chung @cite_25 gives a good introduction to this field. Among the various methods that have been developed, we particularly emphasize the spectral clustering algorithm, which has become one of the major graph-based clustering techniques. Due to its promising performance and close links to other well-studied mathematical fields, a large number of variants of the original algorithm have been proposed, such as constrained spectral clustering algorithms @cite_8 @cite_24 @cite_42 @cite_32 @cite_2 . In general, these works have suggested different ways to incorporate constraints into the clustering task. Among them, @cite_42 has proposed a regularization framework in the graph spectral domain, which is the methodology closest to our work.
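For readers unfamiliar with the basic (unconstrained) algorithm referred to above, here is a compact sketch of spectral clustering with the normalized graph Laplacian, in the standard Ng-Jordan-Weiss style, on a toy adjacency matrix; the variable names and the use of scikit-learn's KMeans are our choices, not those of the cited methods:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    # W: symmetric adjacency (similarity) matrix, k: number of clusters.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt     # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                      # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Toy graph: two 3-node cliques joined by a single weak edge.
W = np.zeros((6, 6))
W[:3, :3] = 1
W[3:, 3:] = 1
np.fill_diagonal(W, 0)
W[2, 3] = W[3, 2] = 0.1
print(spectral_clustering(W, 2))     # e.g. [0 0 0 1 1 1] (label order may differ)
```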
|
{
"cite_N": [
"@cite_8",
"@cite_42",
"@cite_32",
"@cite_24",
"@cite_2",
"@cite_25"
],
"mid": [
"",
"2141902614",
"2149908819",
"2090668741",
"2088857627",
"1578099820"
],
"abstract": [
"",
"We propose a novel framework for constrained spectral clustering with pairwise constraints which specify whether two objects belong to the same cluster or not. Unlike previous methods that modify the similarity matrix with pairwise constraints, we adapt the spectral embedding towards an ideal embedding as consistent with the pairwise constraints as possible. Our formulation leads to a small semidefinite program whose complexity is independent of the number of objects in the data set and the number of pairwise constraints, making it scalable to large-scale problems. The proposed approach is applicable directly to multi-class problems, handles both must-link and cannot-link constraints, and can effectively propagate pairwise constraints. Extensive experiments on real image data and UCI data have demonstrated the efficacy of our algorithm.",
"Clustering performance can often be greatly improved by leveraging side information. In this paper, we consider constrained clustering with pairwise constraints, which specify some pairs of objects from the same cluster or not. The main idea is to design a kernel to respect both the proximity structure of the data and the given pairwise constraints. We propose a spectral kernel learning framework and formulate it as a convex quadratic program, which can be optimally solved efficiently. Our framework enjoys several desirable features: 1) it is applicable to multi-class problems; 2) it can handle both must-link and cannot-link constraints; 3) it can propagate pairwise constraints effectively; 4) it is scalable to large-scale problems; and 5) it can handle weighted pairwise constraints. Extensive experiments have demonstrated the superiority of the proposed approach.",
"Pairwise constraints specify whether or not two samples should be in one cluster. Although it has been successful to incorporate them into traditional clustering methods, such as K-means, little progress has been made in combining them with spectral clustering. The major challenge in designing an effective constrained spectral clustering is a sensible combination of the scarce pairwise constraints with the original affinity matrix. We propose to combine the two sources of affinity by propagating the pairwise constraints information over the original affinity matrix. Our method has a Gaussian process interpretation and results in a closed-form expression for the new affinity matrix. Experiments show it outperforms state-of-the-art constrained clustering methods in getting good clusterings with fewer constraints, and yields good image segmentation with user-specified pairwise constraints.",
"Constrained clustering has been well-studied for algorithms like K-means and hierarchical agglomerative clustering. However, how to encode constraints into spectral clustering remains a developing area. In this paper, we propose a flexible and generalized framework for constrained spectral clustering. In contrast to some previous efforts that implicitly encode Must-Link and Cannot-Link constraints by modifying the graph Laplacian or the resultant eigenspace, we present a more natural and principled formulation, which preserves the original graph Laplacian and explicitly encodes the constraints. Our method offers several practical advantages: it can encode the degree of belief (weight) in Must-Link and Cannot-Link constraints; it guarantees to lower-bound how well the given constraints are satisfied using a user-specified threshold; and it can be solved deterministically in polynomial time through generalized eigendecomposition. Furthermore, by inheriting the objective function from spectral clustering and explicitly encoding the constraints, much of the existing analysis of spectral clustering techniques is still valid. Consequently our work can be posed as a natural extension to unconstrained spectral clustering and be interpreted as finding the normalized min-cut of a labeled graph. We validate the effectiveness of our approach by empirical results on real-world data sets, with applications to constrained image segmentation and clustering benchmark data sets with both binary and degree-of-belief constraints.",
"Eigenvalues and the Laplacian of a graph Isoperimetric problems Diameters and eigenvalues Paths, flows, and routing Eigenvalues and quasi-randomness Expanders and explicit constructions Eigenvalues of symmetrical graphs Eigenvalues of subgraphs with boundary conditions Harnack inequalities Heat kernels Sobolev inequalities Advanced techniques for random walks on graphs Bibliography Index."
]
}
|
1106.2275
|
2950804487
|
Recent years have witnessed a slew of coding techniques custom designed for networked storage systems. Network coding inspired regenerating codes are the most prolifically studied among these new age storage centric codes. A lot of effort has been invested in understanding the fundamental achievable trade-offs of storage and bandwidth usage to maintain redundancy in presence of different models of failures, showcasing the efficacy of regenerating codes with respect to traditional erasure coding techniques. For practical usability in open and adversarial environments, as is typical in peer-to-peer systems, we need however not only resilience against erasures, but also from (adversarial) errors. In this paper, we study the resilience of generalized regenerating codes (supporting multi-repairs, using collaboration among newcomers) in the presence of two classes of Byzantine nodes, relatively benign selfish (non-cooperating) nodes, as well as under more active, malicious polluting nodes. We give upper bounds on the resilience capacity of regenerating codes, and show that the advantages of collaborative repair can turn to be detrimental in the presence of Byzantine nodes. We further exhibit that system mechanisms can be combined with regenerating codes to mitigate the effect of rogue nodes.
|
Pollution attacks are mitigated in peer-to-peer content dissemination systems @cite_18 @cite_8 @cite_17 using a combination of proactive strategies, such as digital signatures provided by the content source, reactive strategies, such as randomized probing of the content source that leverages the causal relationships in the sequence of content to be delivered, and reputation mechanisms. In such settings, the prevention of pollution attacks is further facilitated by the continuous involvement of the content source, which is assumed to be online.
|
{
"cite_N": [
"@cite_18",
"@cite_17",
"@cite_8"
],
"mid": [
"2131770391",
"2087133165",
""
],
"abstract": [
"We study data integrity verification in peer-to-peer media streaming for content distribution. Challenges include the timing constraint of streaming as well as the untrustworthiness of peers. We show the inadequacy of existing data integrity verification protocols, and propose Block-Oriented Probabilistic verification (BOPV), an efficient protocol utilizing message digest and probabilistic verification. We then propose Tree-based Forward Digest Protocol (TFDP) to further reduce the communication overhead. A comprehensive comparison is presented by comparing the performance of existing protocols and our protocols, with respect to overhead, security assurance level, and packet loss tolerance. Finally, experimental results are presented to evaluate the performance of our protocols.",
"RSS (really simple syndication) based feeds have become the defacto mechanism of web based publish subscribe. Peer-to-Peer delivery of such feeds can not only alleviate the load at the content server, but also reduce the dissemination latency. However, due to openness of P2P system, malicious peers can join the network as easily as normal peers do. Such malicious peers may pretend to relay but actually not, and thus deny service, or even disseminate counterfeit updates, rendering a Peer-to-Peer mechanism not only useless, but even harmful (e.g. by false updates). We propose overlay independent randomized strategies to mitigate these ill-effects of malicious peers at a marginal overhead, thus enjoying the benefits of Peer-to-Peer dissemination, along with the assurance of content integrity in RSS like web-based publish-subscribe applications without altering currently deployed server infrastructure. We conduct analysis of performance of our proposal by modeling behavior of the system and validating the same with simulation. Results show that our proposal is effective, robust and scalable.",
""
]
}
|
1106.2275
|
2950804487
|
Recent years have witnessed a slew of coding techniques custom designed for networked storage systems. Network coding inspired regenerating codes are the most prolifically studied among these new age storage centric codes. A lot of effort has been invested in understanding the fundamental achievable trade-offs of storage and bandwidth usage to maintain redundancy in presence of different models of failures, showcasing the efficacy of regenerating codes with respect to traditional erasure coding techniques. For practical usability in open and adversarial environments, as is typical in peer-to-peer systems, we need however not only resilience against erasures, but also from (adversarial) errors. In this paper, we study the resilience of generalized regenerating codes (supporting multi-repairs, using collaboration among newcomers) in the presence of two classes of Byzantine nodes, relatively benign selfish (non-cooperating) nodes, as well as under more active, malicious polluting nodes. We give upper bounds on the resilience capacity of regenerating codes, and show that the advantages of collaborative repair can turn to be detrimental in the presence of Byzantine nodes. We further exhibit that system mechanisms can be combined with regenerating codes to mitigate the effect of rogue nodes.
|
Generally speaking, P2P storage environments are fundamentally different from P2P content distribution networks. The content owner may or may not remain online. Furthermore, the very premise of regenerating codes is a setting where no single node possesses a whole copy of the object to be stored, i.e., a hybrid storage strategy in which one full copy of the data is stored in addition to the encoded blocks is excluded for practical considerations. Likewise, different stored objects may be independent of each other. Hence, mechanisms that provide protection against errors as an inherent property of the code (similar to error correcting codes) become essential. The presented study looks at the fundamental capacity of such codes under some specific adversarial models. This work is thus complementary to other existing storage system approaches, such as incentive and reputation mechanisms @cite_10 and remote data checking techniques @cite_4 for data outsourced to third parties, to name a few. Likewise, Byzantine fault-tolerant algorithms have been used in OceanStore @cite_5 to support reliable data updates. The focus there is on application-level support for updating content, rather than the storage-infrastructure-level Byzantine behavior studied in this paper.
|
{
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_4"
],
"mid": [
"2104210894",
"",
"2155250246"
],
"abstract": [
"OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.",
"",
"Remote Data Checking (RDC) is a technique by which clients can establish that data outsourced at untrusted servers remains intact over time. RDC is useful as a prevention tool, allowing clients to periodically check if data has been damaged, and as a repair tool whenever damage has been detected. Initially proposed in the context of a single server, RDC was later extended to verify data integrity in distributed storage systems that rely on replication and on erasure coding to store data redundantly at multiple servers. Recently, a technique was proposed to add redundancy based on network coding, which offers interesting tradeoffs because of its remarkably low communication overhead to repair corrupt servers. Unlike previous work on RDC which focused on minimizing the costs of the prevention phase, we take a holistic look and initiate the investigation of RDC schemes for distributed systems that rely on network coding to minimize the combined costs of both the prevention and repair phases. We propose RDC-NC, a novel secure and efficient RDC scheme for network coding-based distributed storage systems. RDC-NC mitigates new attacks that stem from the underlying principle of network coding. The scheme is able to preserve in an adversarial setting the minimal communication overhead of the repair component achieved by network coding in a benign setting. We implement our scheme and experimentally show that it is computationally inexpensive for both clients and servers."
]
}
|
1106.2473
|
2953269516
|
We investigate how author name homonymy distorts clustered large-scale co-author networks, and present a simple, effective, scalable and generalizable algorithm to ameliorate such distortions. We evaluate the performance of the algorithm to improve the resolution of mesoscopic network structures. To this end, we establish the ground truth for a sample of author names that is statistically representative of different types of nodes in the co-author network, distinguished by their role for the connectivity of the network. We finally observe that this distinction of node roles based on the mesoscopic structure of the network, in combination with a quantification of author name commonality, suggests a new approach to assess network distortion by homonymy and to analyze the reduction of distortion in the network after disambiguation, without requiring ground truth sampling.
|
In supervised learning, a smaller set of names is manually disambiguated so that a classification model can be trained. In @cite_9 , techniques such as naive Bayes and support vector machines were employed effectively. The drawback of such methods is that the training set needs to be large enough for the classifier to extrapolate accurately to unseen data. This re-introduces the problem of manually disambiguating large sets of names.
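As a toy illustration of the supervised setup described above (entirely made-up pairwise features and labels, not the data or features of the cited study), one can train an SVM to decide whether two name mentions refer to the same author:

```python
from sklearn.svm import SVC

# Each example describes a pair of name mentions by three hypothetical features:
# [number of shared coauthors, title word overlap, same venue?]; label 1 = same person.
X_train = [[2, 0.6, 1], [0, 0.1, 0], [3, 0.4, 1],
           [0, 0.0, 1], [1, 0.5, 0], [0, 0.2, 0]]
y_train = [1, 0, 1, 0, 1, 0]

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[2, 0.5, 1], [0, 0.05, 0]]))   # predictions for two unseen pairs
```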
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2162337786"
],
"abstract": [
"Due to name abbreviations, identical names, name misspellings, and pseudonyms in publications or bibliographies (citations), an author may have multiple names and multiple authors may share the same name. Such name ambiguity affects the performance of document retrieval, Web search, database integration, and may cause improper attribution to authors. We investigate two supervised learning approaches to disambiguate authors in the citations. One approach uses the naive Bayes probability model, a generative model; the other uses support vector machines (SVMs) [V. Vapnik (1995)] and the vector space representation of citations, a discriminative model. Both approaches utilize three types of citation attributes: coauthor names, the title of the paper, and the title of the journal or proceeding. We illustrate these two approaches on two types of data, one collected from the Web, mainly publication lists from homepages, the other collected from the DBLP citation databases."
]
}
|
1106.1622
|
2952715633
|
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
|
As mentioned earlier, the problem defined above has many applications, and it has therefore been studied in various contexts. A popular approach is to use the trace norm as a surrogate for the rank (e.g. @cite_20 ). This approach is closely related to the idea of using the @math norm as a surrogate for sparsity, because low rank corresponds to sparsity of the vector of singular values and the trace norm is the @math norm of the vector of singular values. This approach has been extensively studied, mainly in the context of collaborative filtering; see for example @cite_27 @cite_12 @cite_6 @cite_1 @cite_21 .
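The singular-value analogy above can be checked numerically; a tiny sketch on a random rank-3 matrix (illustrative only, not code from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 8))   # rank-3 matrix
s = np.linalg.svd(M, compute_uv=False)

# Rank = number of nonzero singular values (their "l0 norm") ...
print(int((s > 1e-10).sum()), np.linalg.matrix_rank(M))
# ... while the trace (nuclear) norm is their l1 norm.
print(s.sum(), np.linalg.norm(M, ord='nuc'))
```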
|
{
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_27",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"2000157792",
"2611328865",
"2951328719",
"1966096622",
""
],
"abstract": [
"",
"Let M be an nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) (nr |E|)1 2. Further, if r = O(1) and M is sufficiently unstructured, then it can be reconstructed exactly from |E| = O(n log n) entries. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.",
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys @math for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.",
"This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4 of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.",
"We describe a generalization of the trace heuristic that applies to general nonsymmetric, even non-square, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm. We show that this problem can be reduced to a semidefinite program, hence efficiently solved. To motivate the heuristic, we, show that the dual spectral norm is the convex envelope of the rank on the set of matrices with norm less than one. We demonstrate the method on the problem of minimum-order system approximation.",
""
]
}
|
1106.1622
|
2952715633
|
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
|
In this paper we tackle the rank minimization directly, using a greedy selection approach, without relying on the trace norm as a convex surrogate. Our approach is similar to forward greedy selection approaches for optimization with a sparsity constraint (e.g. the MP @cite_10 and OMP @cite_9 algorithms); in particular, we extend the fully corrective forward greedy selection algorithm given in @cite_24 . We also provide formal guarantees on the competitiveness of our algorithm relative to matrices with small trace norm.
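To make the analogy with forward greedy selection concrete, here is a minimal sketch of a fully corrective greedy rank-one procedure for matrix completion with squared loss, in the spirit of the approach described in the abstract. This is our own simplified implementation with assumed names; it uses a full SVD of the gradient rather than an approximate leading singular pair, so it is illustrative only:

```python
import numpy as np

def greedy_rank1(Y, mask, T=5):
    """Fully corrective greedy rank-one selection for matrix completion
    with squared loss (illustrative sketch).
    Y: target matrix, mask: boolean matrix of observed entries."""
    m, n = Y.shape
    atoms = []                       # list of (u, v) rank-one directions
    X = np.zeros((m, n))
    obs = np.where(mask)
    y = Y[obs]
    for _ in range(T):
        G = np.zeros((m, n))
        G[obs] = (X - Y)[obs]        # gradient of 0.5 * squared loss on observed entries
        U, _, Vt = np.linalg.svd(-G)
        atoms.append((U[:, 0], Vt[0]))               # leading singular pair of -gradient
        # Fully corrective step: refit all coefficients by least squares on the
        # observed entries, treating each atom u v^T as one feature.
        Phi = np.stack([u[obs[0]] * v[obs[1]] for u, v in atoms], axis=1)
        alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        X = sum(a * np.outer(u, v) for a, (u, v) in zip(alpha, atoms))
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))   # rank-3 ground truth
mask = rng.random(M.shape) < 0.6
X_hat = greedy_rank1(M, mask, T=3)
print(np.linalg.norm((X_hat - M)[mask]))    # training error shrinks as atoms are added
```

The joint refit of all coefficients after each new rank-one atom is what makes the step "fully corrective", as opposed to only adjusting the newest component.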
|
{
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_10"
],
"mid": [
"1994520254",
"2128659236",
""
],
"abstract": [
"We study the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor. While the resulting optimization problem is generally NP-hard, several approximation algorithms are considered. We analyze the performance of these algorithms, focusing on the characterization of the trade-off between accuracy and sparsity of the learned predictor in different scenarios.",
"We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks e.g. affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively. >",
""
]
}
|
1106.1622
|
2952715633
|
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
|
Recently, @cite_23 proposed the ADMiRA algorithm, which also follows a greedy approach. However, the ADMiRA algorithm is different: in each step it first chooses @math components and then uses the SVD to revert back to a rank-@math matrix. This is more expensive than our algorithm, which chooses a single rank-1 matrix at each step. The difference between the two algorithms is somewhat similar to the difference between the OMP @cite_9 algorithm for learning sparse vectors and the CoSaMP @cite_25 and SP @cite_16 algorithms. In addition, the ADMiRA algorithm is specific to the squared loss, while our algorithm can handle any smooth loss. Finally, while ADMiRA comes with elegant performance guarantees, these rely on strong assumptions, e.g. that the matrix defining the quadratic loss satisfies a rank-restricted isometry property. In contrast, our analysis only assumes smoothness of the loss function.
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_25",
"@cite_23"
],
"mid": [
"2128659236",
"1527917680",
"2289917018",
"2158923808"
],
"abstract": [
"We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks e.g. affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively. >",
"Abstract : We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques, and reconstruction accuracy of the same order as that of LP optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse it can be shown that the mean squared error of the reconstruction is upper bounded by constant multiples of the measurement and signal perturbation energies.",
"Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O ( N log 2 N ) , where N is the length of the signal.",
"In this paper, we address compressed sensing of a low-rank matrix posing the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition providing an analogy between parsimonious representations of a sparse vector and a low-rank matrix and extending efficient greedy algorithms from the vector to the matrix case. In particular, we propose an efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) that extends Needell and Tropp's compressive sampling matching pursuit (CoSaMP) algorithm from the sparse vector to the low-rank matrix case. The performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements and approximately low-rank solution. With a sparse measurement operator as in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. Numerical experiments for the matrix completion problem show that, although the R-RIP is not satisfied in this case, ADMiRA is a competitive algorithm for matrix completion."
]
}
|
1106.1622
|
2952715633
|
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
|
The algorithm we propose is also related to Hazan's algorithm @cite_15 for solving PSD problems, which in turn relies on the Frank-Wolfe algorithm @cite_8 (see Clarkson @cite_19 ), as well as to the follow-up paper of @cite_14 , which applies Hazan's algorithm to optimization with trace-norm constraints. There are several important differences, though. First, we tackle the problem directly and enforce neither PSDness of the matrix nor a bounded trace norm. Second, our algorithm is "fully corrective", that is, it extracts all the information from the existing components before adding a new component. These differences between the approaches are analogous to the difference between the Frank-Wolfe algorithm and fully corrective greedy selection for minimization over sparse vectors, as discussed in @cite_24 . Finally, while each iteration of both methods involves approximately finding leading eigenvectors, in @cite_15 the quality of the approximation should improve as the algorithm progresses, whereas our algorithm can always rely on the same constant approximation factor.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_24",
"@cite_19",
"@cite_15"
],
"mid": [
"1574851760",
"2136885855",
"1994520254",
"2621075174",
"1775587472"
],
"abstract": [
"Optimization problems with a nuclear norm regularization, such as e.g. low norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.",
"",
"We study the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor. While the resulting optimization problem is generally NP-hard, several approximation algorithms are considered. We analyze the performance of these algorithms, focusing on the characterization of the trade-off between accuracy and sparsity of the learned predictor in different scenarios.",
"The problem of maximizing a concave function f(x) in a simplex S can be solved approximately by a simple greedy algorithm. For given k, the algorithm can find a point x(k) on a k-dimensional face of S, such that f(x(k)) ≥ f(x*) - O(1 k). Here f(x*) is the maximum value of f in S. This algorithm and analysis were known before, and related to problems of statistics and machine learning, such as boosting, regression, and density mixture estimation. In other work, coming from computational geometry, the existence of e-coresets was shown for the minimum enclosing ball problem, by means of a simple greedy algorithm. Similar greedy algorithms, that are special cases of the Frank-Wolfe algorithm, were described for other enclosure problems. Here these results are tied together, stronger convergence results are reviewed, and several coreset bounds are generalized or strengthened.",
"We propose an algorithm for approximately maximizing a concave function over the bounded semi-definite cone, which produces sparse solutions. Sparsity for SDP corresponds to low rank matrices, and is a important property for both computational as well as learning theoretic reasons. As an application, building on Aaronson's recent work, we derive a linear time algorithm for Quantum State Tomography."
]
}
|
1106.1636
|
1877846107
|
Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.
|
Implicitization is another algebraic technique; it uses Gröbner bases to find the smallest algebraic variety (set of polynomial equality statements) that contains a semi-algebraic set @cite_9 @cite_4 . Unfortunately, this approach has two limitations. First, it can only find equality constraints, which may not be sufficient for some models. For instance, in the case of the CHSH experiment, the smallest algebraic variety containing "local hidden variable" models will also contain quantum correlations, whereas the CHSH inequalities rule out some quantum correlations @cite_22 . The second drawback of the implicitization approach is that its complexity depends on the size of the domains of all the model variables and, in principle, the size of the domain of the latent variable could be infinite.
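A tiny sympy example of the implicitization idea (a toy parametrization, not one of the models treated in the cited papers): eliminating a latent parameter with a lexicographic Gröbner basis leaves exactly the polynomial equalities satisfied by the observables.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')            # t is the latent parameter
# Observables expressed through the latent variable: x = t, y = t**2.
G = sp.groebner([x - t, y - t**2], t, x, y, order='lex')
# Basis elements free of t cut out the smallest variety containing the model,
# here (up to sign) y - x**2.
print([g for g in G.exprs if t not in g.free_symbols])
```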
|
{
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_22"
],
"mid": [
"",
"2101776314",
"62934356"
],
"abstract": [
"",
"We use the implicitization procedure to generate polynomial equality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network with hidden variables. We show how we may reduce the complexity of the implicitization problem and make the problem tractable in certain causal Bayesian networks. We also show some preliminary results on the algebraic structure of polynomial constraints. The results have applications in distinguishing between causal models and in testing causal models with combined observational and experimental data.",
"Nonlocality refers to correlations between spatially separated parties that are stronger than those explained by the existence of local hidden variables. Quantum mechanics is known to allow some nonlocal correlations between particles in a phenomena known as entanglement. We explore several aspects of nonlocality in general and how they relate to quantum mechanics. First, we construct a hierarchy of theories with nonlocal correlations stronger than those allowed in quantum mechanics and derive several results about these theories. We show that these theories include codes that can store an amount of information exponential in the number of physical bits used. We use this result to demonstrate an unphysical consequence of theories with stronger-than-quantum correlations: learning even an approximate description of states in such theories would be practically impossible. Next, we consider the difficult problem of determining whether specific correlations are nonlocal. We present a novel learning algorithm and show that it provides an outer bound on the set of local states, and can therefore be used to identify some nonlocal states. Finally, we put nonlocal correlations to work by showing that the entanglement present in the vacuum of a quantum field can be used to detect spacetime curvature. We quantify how the entangling power of the quantum field varies as a function of spacetime curvature."
]
}
|
1106.1636
|
1877846107
|
Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.
|
A similar approach to implicitization, which also allows one to find equality constraints among the observed variables in a latent variable graphical model, is given in @cite_15 . This approach suffers from only one of implicitization's drawbacks: while it does not depend on the domain of the latent variable, which could be infinite, it is still restricted to equality constraints, which may be insufficient.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2950434967"
],
"abstract": [
"The validity OF a causal model can be tested ONLY IF the model imposes constraints ON the probability distribution that governs the generated data. IN the presence OF unmeasured variables, causal models may impose two types OF constraints : conditional independencies, AS READ through the d - separation criterion, AND functional constraints, FOR which no general criterion IS available.This paper offers a systematic way OF identifying functional constraints AND, thus, facilitates the task OF testing causal models AS well AS inferring such models FROM data."
]
}
|
1106.1636
|
1877846107
|
Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.
|
In the Bayesian graphical model literature, the most studied examples of hidden variable tests are the instrumental inequalities, which can be derived using linear programming techniques @cite_20 and were studied in greater detail in @cite_2 . The authors of @cite_23 considered the same general question as this paper, namely, a general method for identifying inequality constraints in models with hidden variables. Their approach leads to a specific necessary (but not sufficient) set of inequalities, but there is no way to produce tighter inequalities or, as in our case, to produce an inequality optimized to rule out a given observation.
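As a toy illustration of linear-programming-based hidden variable tests (not the instrumental inequalities themselves, and using a grid discretization chosen only for this sketch), one can check by LP feasibility whether observed moments could arise from a mixture of i.i.d. Bernoulli pairs; infeasibility rules out that latent variable model for the given observation.

```python
import numpy as np
from scipy.optimize import linprog

def iid_mixture_test(m1, m2, grid=np.linspace(0, 1, 101)):
    # Is there a distribution w over theta in [0,1] (on a finite grid) with
    # E[theta] = m1 = P(X1=1) and E[theta^2] = m2 = P(X1=1, X2=1)?
    A_eq = np.vstack([np.ones_like(grid), grid, grid ** 2])
    b_eq = np.array([1.0, m1, m2])
    res = linprog(c=np.zeros(len(grid)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(grid))
    return res.success

print(iid_mixture_test(0.5, 0.30))   # True: such a latent mixture exists
print(iid_mixture_test(0.5, 0.10))   # False: would require E[theta^2] < E[theta]^2
```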
|
{
"cite_N": [
"@cite_23",
"@cite_20",
"@cite_2"
],
"mid": [
"1870337627",
"2143891888",
""
],
"abstract": [
"We present a class of inequality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network, in which some of the variables remain unmeasured. We derive bounds on causal effects that are not directly measured in randomized experiments. We derive instrumental inequality type of constraints on nonexperimental distributions. The results have applications in testing causal models with observational or experimental data.",
"1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect.",
""
]
}
|
1106.0346
|
1600782690
|
Twitter is used for a variety of reasons, including information dissemination, marketing, political organizing and to spread propaganda, spamming, promotion, conversations, and so on. Characterizing these activities and categorizing associated user generated content is a challenging task. We present an information-theoretic approach to classification of user activity on Twitter. We focus on tweets that contain embedded URLs and study their collective 'retweeting' dynamics. We identify two features, time-interval and user entropy, which we use to classify retweeting activity. We achieve good separation of different activities using just these two features and are able to categorize content based on the collective user response it generates. We have identified five distinct categories of retweeting activity on Twitter: automatic robotic activity, newsworthy information dissemination, advertising and promotion, campaigns, and parasitic advertisement. In the course of our investigations, we have shown how Twitter can be exploited for promotional and spam-like activities. The content-independent, entropy-based activity classification method is computationally efficient, scalable and robust to sampling and missing data. It has many applications, including automatic spam-detection, trend identification, trust management, user-modeling, social search and content classification on online social media.
|
There has been some work on characterizing temporal variation in online social media. In @cite_11 , the authors enumerate the different approximate shapes of the temporal distribution of content on Twitter. Unlike us, however, they do not associate semantic meaning with the clusters they observe.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2112056172"
],
"abstract": [
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention."
]
}
|
1106.0346
|
1600782690
|
Twitter is used for a variety of reasons, including information dissemination, marketing, political organizing and to spread propaganda, spamming, promotion, conversations, and so on. Characterizing these activities and categorizing associated user generated content is a challenging task. We present an information-theoretic approach to classification of user activity on Twitter. We focus on tweets that contain embedded URLs and study their collective 'retweeting' dynamics. We identify two features, time-interval and user entropy, which we use to classify retweeting activity. We achieve good separation of different activities using just these two features and are able to categorize content based on the collective user response it generates. We have identified five distinct categories of retweeting activity on Twitter: automatic robotic activity, newsworthy information dissemination, advertising and promotion, campaigns, and parasitic advertisement. In the course of our investigations, we have shown how Twitter can be exploited for promotional and spam-like activities. The content-independent, entropy-based activity classification method is computationally efficient, scalable and robust to sampling and missing data. It has many applications, including automatic spam-detection, trend identification, trust management, user-modeling, social search and content classification on online social media.
|
Previous work has tried to estimate the quality or interestingness of content @cite_19 @cite_9 . However, quality or interestingness is a subjective measure, biased by the perspective of the user: what counts as high-quality or interesting information to a campaigner might be junk to a news aggregator. There is therefore a need for an objective, quantitative measure of user-generated content. Our entropy-based approach for classifying user activity and content addresses this need. While the method described in @cite_9 is similar in spirit to ours, it can discover only three classes of activity; the heterogeneous activity on Twitter requires more than three classes.
|
{
"cite_N": [
"@cite_19",
"@cite_9"
],
"mid": [
"2037858832",
"2406592771"
],
"abstract": [
"The quality of user-generated content varies drastically from excellent to abuse and spam. As the availability of such content increases, the task of identifying high-quality content sites based on user contributions --social media sites -- becomes increasingly important. Social media in general exhibit a rich variety of information sources: in addition to the content itself, there is a wide array of non-content information available, such as links between items and explicit quality ratings from members of the community. In this paper we investigate methods for exploiting such community feedback to automatically identify high quality content. As a test case, we focus on Yahoo! Answers, a large community question answering portal that is particularly rich in the amount and types of content and social interactions available in it. We introduce a general classification framework for combining the evidence from different sources of information, that can be tuned automatically for a given social media type and quality definition. In particular, for the community question answering domain, we show that our system is able to separate high-quality items from the rest with an accuracy close to that of humans",
"With the rise of web 2.0 there is an ever-expanding source of interesting media because of the proliferation of usergenerated content. However, mixed in with this is a large amount of noise that creates a proverbial “needle in the haystack” when searching for relevant content. Although there is hope that the rich network of interwoven metadata may contain enough structure to eventually help sift through this noise, currently many sites serve up only the “most popular” things. Identifying only the most popular items can be useful, but doing so fails to take into account the famous “long tail” behavior of the web—the notion that the collective effect of small, niche interests can outweigh the market share of the few blockbuster (i.e. most-popular) items—thus providing only content that has mass appeal and masking the interests of the idiosyncratic many. YouTube, for example, hosts over 40 million videos— enough content to keep one occupied for more than 200 years. Are there intelligent tools to search through this information-rich environment and identify interesting and relevant content? Is there a way to identify emerging trends or “hot topics” in addition to indexing the long tail for content that has real value?"
]
}
|
1106.0346
|
1600782690
|
Twitter is used for a variety of reasons, including information dissemination, marketing, political organizing and to spread propaganda, spamming, promotion, conversations, and so on. Characterizing these activities and categorizing associated user generated content is a challenging task. We present an information-theoretic approach to classification of user activity on Twitter. We focus on tweets that contain embedded URLs and study their collective 'retweeting' dynamics. We identify two features, time-interval and user entropy, which we use to classify retweeting activity. We achieve good separation of different activities using just these two features and are able to categorize content based on the collective user response it generates. We have identified five distinct categories of retweeting activity on Twitter: automatic robotic activity, newsworthy information dissemination, advertising and promotion, campaigns, and parasitic advertisement. In the course of our investigations, we have shown how Twitter can be exploited for promotional and spam-like activities. The content-independent, entropy-based activity classification method is computationally efficient, scalable and robust to sampling and missing data. It has many applications, including automatic spam-detection, trend identification, trust management, user-modeling, social search and content classification on online social media.
|
Most existing spam detection @cite_6 and trust management @cite_13 systems are based on content and structure but do not look at collective dynamics. Moreover, they usually require additional constraints such as labelled, up-to-date annotation of resources, access to content, and cooperation of the search engine. Satisfying so many constraints is difficult, especially when one takes the diversity and astronomical size of online social media into account. Our method, on the other hand, has no such constraints and may be able to detect spam with an accuracy close to that of humans.
|
{
"cite_N": [
"@cite_13",
"@cite_6"
],
"mid": [
"2101077364",
"2128509431"
],
"abstract": [
"Web 2.0 promises rich opportunities for information sharing, electronic commerce, and new modes of social interaction, all centered around the \"social Web\" of user-contributed content, social annotations, and person-to-person social connections. But the increasing reliance on this \"social Web\" also places individuals and their computer systems at risk, creating opportunities for malicious participants to exploit the tight social fabric of these networks. With these problems in mind, we propose the SocialTrust framework for tamper-resilient trust establishment in online communities. SocialTrust provides community users with dynamic trust values by (i) distinguishing relationship quality from trust; (ii) incorporating a personalized feedback mechanism for adapting as the community evolves; and (iii) tracking user behavior. We experimentally evaluate the SocialTrust framework using real online social networking data consisting of millions of MySpace profiles and relationships. We find that SocialTrust supports robust trust establishment even in the presence of large-scale collusion by malicious participants.",
"The popularity of social bookmarking sites has made them prime targets for spammers. Many of these systems require an administrator's time and energy to manually filter or remove spam. Here we discuss the motivations of social spam, and present a study of automatic detection of spammers in a social tagging system. We identify and analyze six distinct features that address various properties of social spam, finding that each of these features provides for a helpful signal to discriminate spammers from legitimate users. These features are then used in various machine learning algorithms for classification, achieving over 98 accuracy in detecting social spammers with 2 false positives. These promising results provide a new baseline for future efforts on social spam. We make our dataset publicly available to the research community."
]
}
|
1106.0346
|
1600782690
|
Twitter is used for a variety of reasons, including information dissemination, marketing, political organizing and to spread propaganda, spamming, promotion, conversations, and so on. Characterizing these activities and categorizing associated user generated content is a challenging task. We present an information-theoretic approach to classification of user activity on Twitter. We focus on tweets that contain embedded URLs and study their collective 'retweeting' dynamics. We identify two features, time-interval and user entropy, which we use to classify retweeting activity. We achieve good separation of different activities using just these two features and are able to categorize content based on the collective user response it generates. We have identified five distinct categories of retweeting activity on Twitter: automatic robotic activity, newsworthy information dissemination, advertising and promotion, campaigns, and parasitic advertisement. In the course of our investigations, we have shown how Twitter can be exploited for promotional and spam-like activities. The content-independent, entropy-based activity classification method is computationally efficient, scalable and robust to sampling and missing data. It has many applications, including automatic spam-detection, trend identification, trust management, user-modeling, social search and content classification on online social media.
|
Automated email spamming has been studied by @cite_7 . They identified the activity of botnets generating e-mail spam as being 'bursty' (inferred from the duration of activity) and 'specific' (pertaining to a randomly generated URL matching the signature). In this study of Twitter, we identify automated activity by a set pattern of retweeting, indicated by a much lower time-interval entropy compared to the user entropy.
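To make the two features concrete, the following minimal sketch (our illustration, not code from this paper or the cited work) shows one way the time-interval entropy and the user entropy of a single URL's retweet stream could be computed; the record format, the log-scale bucketing of inter-retweet gaps, and all identifiers are assumptions of the sketch.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def retweet_features(retweets):
    """Return (time-interval entropy, user entropy) for one URL.

    `retweets` is assumed to be a time-ordered list of
    (timestamp_in_seconds, user_id) tuples.  Inter-retweet gaps are
    bucketed on a log scale so that the entropy is taken over a discrete
    distribution; the bucketing scheme is an assumption of this sketch.
    """
    times = [t for t, _ in retweets]
    users = [u for _, u in retweets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    buckets = [0 if g <= 0 else int(math.log10(g)) + 1 for g in gaps]
    return shannon_entropy(buckets), shannon_entropy(users)

# Near-periodic retweets by many distinct accounts: low time-interval
# entropy together with high user entropy.
rts = [(60 * i, "user%d" % i) for i in range(50)]
print(retweet_features(rts))  # roughly (0.0, 5.64)
```

Under this sketch, automated rebroadcasting shows up as a time-interval entropy far below the user entropy, while irregular retweeting by many independent users scores high on both.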
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2112063328"
],
"abstract": [
"In this paper, we focus on characterizing spamming botnets by leveraging both spam payload and spam server traffic properties. Towards this goal, we developed a spam signature generation framework called AutoRE to detect botnet-based spam emails and botnet membership. AutoRE does not require pre-classified training data or white lists. Moreover, it outputs high quality regular expression signatures that can detect botnet spam with a low false positive rate. Using a three-month sample of emails from Hotmail, AutoRE successfully identified 7,721 botnet-based spam campaigns together with 340,050 unique botnet host IP addresses. Our in-depth analysis of the identified botnets revealed several interesting findings regarding the degree of email obfuscation, properties of botnet IP addresses, sending patterns, and their correlation with network scanning traffic. We believe these observations are useful information in the design of botnet detection schemes."
]
}
|
1106.0346
|
1600782690
|
Twitter is used for a variety of reasons, including information dissemination, marketing, political organizing and to spread propaganda, spamming, promotion, conversations, and so on. Characterizing these activities and categorizing associated user generated content is a challenging task. We present an information-theoretic approach to classification of user activity on Twitter. We focus on tweets that contain embedded URLs and study their collective 'retweeting' dynamics. We identify two features, time-interval and user entropy, which we use to classify retweeting activity. We achieve good separation of different activities using just these two features and are able to categorize content based on the collective user response it generates. We have identified five distinct categories of retweeting activity on Twitter: automatic robotic activity, newsworthy information dissemination, advertising and promotion, campaigns, and parasitic advertisement. In the course of our investigations, we have shown how Twitter can be exploited for promotional and spam-like activities. The content-independent, entropy-based activity classification method is computationally efficient, scalable and robust to sampling and missing data. It has many applications, including automatic spam-detection, trend identification, trust management, user-modeling, social search and content classification on online social media.
|
We can automatically detect newsworthy, information-rich content and separate it from other user-generated content, based on the user response it generates. We have shown that this method can further categorize content within this class into blogs or celebrity websites and news. @cite_21 study the flow of information between these sub-categories.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2112896229"
],
"abstract": [
"We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter known as \"lists\" to distinguish between elite users - by which we mean celebrities, bloggers, and representatives of media outlets and other formal organizations - and ordinary users. Based on this classification, we find a striking concentration of attention on Twitter, in that roughly 50 of URLs consumed are generated by just 20K elite users, where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers etc; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical \"two-step flow\" theory of communications, finding considerable support for it on Twitter. Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics."
]
}
|
1106.0478
|
2950825749
|
This paper presents a semantics of self-adjusting computation and proves that the semantics are correct and consistent. The semantics integrate change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics integrate memoization and change-propagation, it involves both non-determinism (due to memoization) and mutation (due to change propagation). Our consistency theorem states that the non-determinism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalize the semantics and their meta-theory in the LF logical framework and machine check our proofs using Twelf.
|
Dependence graphs record the dependencies between data in a computation and rely on a change-propagation algorithm to update the computation when the input is modified (e.g., @cite_9 @cite_17 ). Dependence graphs are effective in some applications, e.g., syntax-directed computations, but are not general-purpose because change propagation does not update the dependence structure. Memoization (also called function caching) (e.g., @cite_15 @cite_3 @cite_16 ) applies to any purely functional program and is therefore more broadly applicable than static dependence graphs. This classic idea, dating back to the late 1950s @cite_20 @cite_1 @cite_0 , yields efficient incremental computations when executions of a program with similar inputs perform similar function calls. It turns out, however, that even small input modifications can prevent reuse via memoization, e.g., when they affect computations deep in the call tree @cite_5 . Partial-evaluation-based approaches @cite_11 @cite_7 require the user to fix a partition of the input and specialize the program so that modifications to the unfixed part can be handled faster. The main limitation of this approach is that it allows input modifications only within the predetermined partition.
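As a generic illustration of the memoization idea discussed above (a minimal sketch, not taken from any of the cited systems), the following fragment caches the results of a divide-and-conquer computation: a change near the end of the input reuses most of the cached call tree, whereas a change near the front would defeat almost all reuse, which is exactly the limitation noted above.

```python
from functools import lru_cache

call_count = 0  # counts cache misses, i.e. calls whose body actually runs

@lru_cache(maxsize=None)
def tree_sum(values):
    """Sum a tuple by divide and conquer; each distinct subtree is computed once."""
    global call_count
    call_count += 1
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return tree_sum(values[:mid]) + tree_sum(values[mid:])

xs = tuple(range(8))
tree_sum(xs)
first = call_count          # 15 executed calls for the full call tree
tree_sum(xs[:-1] + (99,))   # change the last element only
print(first, call_count - first)  # prints "15 4": only the right spine re-runs
```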
|
{
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2007009457",
"2014832211",
"",
"2166089338",
"",
"1965784613",
"2171901695",
"2035829578",
"1999707021",
"",
""
],
"abstract": [
"It is a common occurrence in a programming environment to apply a software tool to a series of similar inputs. Examples include compilers, interpreters, text formatters, etc., whose inputs are usually incrementally modifed text files. Thus programming environment researchers have recognized the importance of building incremental versions of these tools — i.e. ones which can efficiently update the result of a computation when the input changes only slightly.",
"An incremental algorithm is one that takes advantage of the fact that the function it computes is to be evaluated repeatedly on inputs that differ only slightly from one another, avoiding unnecessary duplication of common computations. We define here a new notion of incrementality for reduction in the untyped λ-calculus and describe an incremental reduction algorithm, Λ inc . We show that Λ inc has the desirable property of performing non-overlapping reductions on related terms, yet is simple enough to allow a practical implementation. The algorithm is based on a novel λ-reduction strategy that may prove useful in a non-incremental setting as well. Incremental λ-reduction can be used to advantage in any setting where an algorithm is specified in a functional or applicative manner.",
"",
"Publisher Summary This chapter discusses the mathematical theory of computation. Computation essentially explores how machines can be made to carry out intellectual processes. Any intellectual process that can be carried out mechanically can be performed by a general purpose digital computer. There are three established directions of mathematical research that are relevant to the science of computation—namely, numerical analysis, theory of computability, and theory of finite automata. The chapter explores what practical results can be expected from a suitable mathematical theory. Further, the chapter presents several descriptive formalisms with a few examples of their use and theories that enable to prove the equivalence of computations expressed in these formalisms. A few mathematical results about the properties of the formalisms are also presented.",
"",
"It would be useful if computers could learn from experience and thus automatically improve the efficiency of their own programs during execution. A simple but effective rote-learning facility can be provided within the framework of a suitable programming language.",
"Dependence graphs and memoization can be used to efficiently update the output of a program as the input changes dynamically. Recent work has studied techniques for combining these approaches to effectively dynamize a wide range of applications. Toward this end various theoretical results were given. In this paper we describe the implementation of a library based on these ideas, and present experimental results on the efficiency of this library on a variety of applications. The results of the experiments indicate that the approach is effective in practice, often requiring orders of magnitude less time than recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.",
"",
"We address the problem of dependency analysis and caching in the context of the l-calculus. The dependencies of a l-term are (roughly) the parts of the l-term that contribute to the result of evaluating it. We introduce a mechanism for keeping track of dependencies, and discuss how to use these dependencies in caching.",
"",
""
]
}
|
1106.0478
|
2950825749
|
This paper presents a semantics of self-adjusting computation and proves that the semantics are correct and consistent. The semantics integrate change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics integrate memoization and change-propagation, it involves both non-determinism (due to memoization) and mutation (due to change propagation). Our consistency theorem states that the non-determinism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalize the semantics and their meta-theory in the LF logical framework and machine check our proofs using Twelf.
|
The semantics proposed here achieve efficient incremental computation by integrating memoization with a previous generalization of dependence graphs that allows change propagation to modify the dependence structure @cite_2 . Specifically, it permits the change-propagation algorithm to re-use computations even after the computation state has been modified via mutations to memory. In contrast, conventional memoization permits re-use only of the (unchanged) results of computations.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1971597822"
],
"abstract": [
"We present techniques for incremental computing by introducing adaptive functional programming. As an adaptive program executes, the underlying system represents the data and control dependences in the execution in the form of a dynamic dependence graph. When the input to the program changes, a change propagation algorithm updates the output and the dynamic dependence graph by propagating changes through the graph and re-executing code where necessary. Adaptive programs adapt their output to any change in the input, small or large.We show that adaptivity techniques are practical by giving an efficient implementation as a small ML library. The library consists of three operations for making a program adaptive, plus two operations for making changes to the input and adapting the output to these changes. We give a general bound on the time it takes to adapt the output, and based on this, show that an adaptive Quicksort adapts its output in logarithmic time when its input is extended by one key.To show the safety and correctness of the mechanism we give a formal definition of AFL, a call-by-value functional language extended with adaptivity primitives. The modal type system of AFL enforces correct usage of the adaptivity mechanism, which can only be checked at run time in the ML library. Based on the AFL dynamic semantics, we formalize thechange-propagation algorithm and prove its correctness."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
In this paper, we are motivated by recent work on interference networks showing that binary power-control is often close to optimal when interference is treated as Gaussian noise, links have maximum (peak) power constraints, and the objective is to maximize the sum-rate, even if it is not necessarily optimal in general @cite_18 . 'Binary' here simply means that a link is either 'on' or 'off': either at zero power or at the peak power, without taking any value in the continuum of possible values between @math and the peak power level.
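As a small operational illustration of binary power-control (a sketch under assumed channel gains, peak power and noise level, not the algorithm proposed in this paper or in @cite_18 ), one can exhaustively search the 2^N on/off assignments, with every active link transmitting at peak power, for the assignment that maximizes the sum-rate when interference is treated as Gaussian noise.

```python
import itertools
import math

def sum_rate(powers, gains, noise):
    """Sum of log2(1 + SINR) over links, treating interference as Gaussian noise.

    Single-cell uplink model: all signals arrive at one receiver, so the
    interference seen by link i is the sum of the other received powers.
    """
    total = 0.0
    for i, (p, g) in enumerate(zip(powers, gains)):
        if p == 0:
            continue
        interference = sum(q * h for j, (q, h) in enumerate(zip(powers, gains)) if j != i)
        total += math.log2(1.0 + p * g / (noise + interference))
    return total

def best_binary_allocation(gains, p_max, noise):
    """Exhaustive search over on/off patterns; feasible only for small N."""
    best = (float("-inf"), None)
    for pattern in itertools.product([0.0, p_max], repeat=len(gains)):
        best = max(best, (sum_rate(pattern, gains, noise), pattern))
    return best

# Illustrative (assumed) values: three links with unequal gains.
print(best_binary_allocation(gains=[1.0, 0.8, 0.1], p_max=1.0, noise=0.1))
```

Such exhaustive search scales exponentially in the number of links, which is why structural characterizations of the optimum binary allocation are valuable.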
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2150586017"
],
"abstract": [
"We consider allocating the transmit powers for a wireless multi-link (N-link) system, in order to maximize the total system throughput under interference and noise impairments, and short term power constraints. Employing dynamic spectral reuse, we allow for centralized control. In the two-link case, the optimal power allocation then has a remarkably simple nature termed binary power control: depending on the noise and channel gains, assign full power to one link and minimum to the other, or full power on both. Binary power control (BPC) has the advantage of leading towards simpler or even distributed power control algorithms. For N>2 we propose a strategy based on checking the corners of the domain resulting from the power constraints to perform BPC. We identify scenarios in which binary power allocation can be proven optimal also for arbitrary N. Furthermore, in the general setting for N>2, simulations demonstrate that a throughput performance with negligible loss, compared to the best non-binary scheme found by geometric programming, can be obtained by BPC. Finally, to reduce the complexity of optimal binary power allocation for large networks, we provide simple algorithms achieving 99 of the capacity promised by exhaustive binary search."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
In addition to @cite_18 , other works such as @cite_6 , @cite_21 and @cite_3 also motivate us to investigate the optimality of binary power-control. Both @cite_6 and @cite_21 consider jointly optimal allocation of rates and transmission powers in CDMA networks under alternative objectives, such as maximization of the sum of signal-to-interference-plus-noise ratios ( @math ) @cite_6 and of the packet success probability @cite_21 . Both approaches convert the problem into a convex optimization problem and show that the optimum power-control is indeed binary under such approximations. In @cite_3 , the authors proved the optimality of an almost-binary power-control strategy for maximizing the total uplink communication rate, with at most one exceptional transmission power level lying strictly between @math and the peak power level.
|
{
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_3",
"@cite_6"
],
"mid": [
"2150586017",
"2045996972",
"1974592452",
"2063449729"
],
"abstract": [
"We consider allocating the transmit powers for a wireless multi-link (N-link) system, in order to maximize the total system throughput under interference and noise impairments, and short term power constraints. Employing dynamic spectral reuse, we allow for centralized control. In the two-link case, the optimal power allocation then has a remarkably simple nature termed binary power control: depending on the noise and channel gains, assign full power to one link and minimum to the other, or full power on both. Binary power control (BPC) has the advantage of leading towards simpler or even distributed power control algorithms. For N>2 we propose a strategy based on checking the corners of the domain resulting from the power constraints to perform BPC. We identify scenarios in which binary power allocation can be proven optimal also for arbitrary N. Furthermore, in the general setting for N>2, simulations demonstrate that a throughput performance with negligible loss, compared to the best non-binary scheme found by geometric programming, can be obtained by BPC. Finally, to reduce the complexity of optimal binary power allocation for large networks, we provide simple algorithms achieving 99 of the capacity promised by exhaustive binary search.",
"This paper addresses the problem of dynamic resource allocation in a multiservice direct-sequence code-division multiple-access (DS-CDMA) wireless network supporting real-time (RT) and nonreal-time (NRT) communication services. For RT users, a simple transmission power allocation strategy is assumed that maximizes the amount of capacity available to NRT users without violating quality of service requirements of RT users. For NRT users, a joint transmission power and spreading gain (transmission rate) allocation strategy, obtained via the solution of a constrained optimization problem, is provided. The solution maximizes the aggregate NRT throughput, subject to peak transmission power constraints and the capacity constraint imposed by RT users. The optimization problem is solved in a closed form, and the resulting resource allocation strategy is simple to implement as a hybrid CDMA time-division multiple-access strategy. Numerical results are presented showing that the optimal resource allocation strategy can offer substantial performance gains over other conventional resource allocation strategies for DS-CDMA networks.",
"The information-theoretic sum capacity of reverse link CDMA systems with QoS constraints is investigated in this paper. Since the reverse link of CDMA systems are, for a given channel and noise conditions, interference-limited, the sum capacity can be achieved by optimally allocating the transmit powers of the mobile stations with the optimal (Shannon) coding. Unfortunately, the sum capacity is usually achieved via unfair resource allocation. This can be avoided by imposing QoS constraints on the system. The results here show that for a single cell system, the sum capacity can be achieved while meeting the QoS constraints with a semi-bang-bang power allocation strategy. Numerical results are then presented to show the multi-user diversity gain and the impact of QoS constraints. The implication of TDM operation in a practical reverse link CDMA system is also discussed.",
"We determine the optimal adaptive rate and power control strategies to maximize the total throughput in a multirate code-division multiple-access system. The total throughput of the system provides a meaningful baseline in the form of an upper bound to the throughput achievable with additional restrictions imposed on the system to guarantee fairness. Peak power and instantaneous bit energy-to-noise spectral density constraints are assumed at the transmitter with matched filter detection at the receiver. Our results apply to frequency selective fading in so far as the bit energy-to-equivalent noise power spectral density ratio definition can be used as the quality-of-service metric. The bit energy-to-equivalent noise power spectral density ratio metric coincides with the bit-error rate metric under the assumption that the processing gains and the number of users are high enough so that self-interference can be neglected. We first obtain results for the case where the rates available to each user are unrestricted, and we then consider the more practical scenario where each user has a finite discrete set of rates. An upper bound to the maximum average throughput is obtained and evaluated for Rayleigh fading. Suboptimal low-complexity schemes are considered to illustrate the performance tradeoffs between optimality and complexity. We also show that the optimum rate and power adaptation scheme with unconstrained rates is in fact just a rate adaptation scheme with fixed transmit powers, and it performs significantly better than a scheme that uses power adaptation alone."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
Majorization theory and Schur-convex/Schur-concave structures have also been used successfully in previous works, including @cite_5 , @cite_12 , @cite_22 and @cite_9 , to answer important questions in communication theory. This paper is another application of majorization theory, here used to prove the optimality of binary power-control.
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_22",
"@cite_12"
],
"mid": [
"2049285480",
"2134233838",
"2123074269",
"2003010976"
],
"abstract": [
"Multiple-input multiple-output (MIMO) channels provide an abstract and unified representation of different physical communication systems, ranging from multi-antenna wireless channels to wireless digital subscriber line systems. They have the key property that several data streams can be simultaneously established. In general, the design of communication systems for MIMO channels is quite involved (if one can assume the use of sufficiently long and good codes, then the problem formulation simplifies drastically). The first difficulty lies on how to measure the global performance of such systems given the tradeoff on the performance among the different data streams. Once the problem formulation is defined, the resulting mathematical problem is typically too complicated to be optimally solved as it is a matrix-valued nonconvex optimization problem. This design problem has been studied for the past three decades (the first papers dating back to the 1970s) motivated initially by cable systems and more recently by wireless multi-antenna systems. The approach was to choose a specific global measure of performance and then to design the system accordingly, either optimally or suboptimally, depending on the difficulty of the problem. This text presents an up-to-date unified mathematical framework for the design of point-to-point MIMO transceivers with channel state information at both sides of the link according to an arbitrary cost function as a measure of the system performance. In addition, the framework embraces the design of systems with given individual performance on the data streams. Majorization theory is the underlying mathematical theory on which the framework hinges. It allows the transformation of the originally complicated matrix-valued nonconvex problem into a simple scalar problem. In particular, the additive majorization relation plays a key role in the design of linear MIMO transceivers (i.e., a linear precoder at the transmitter and a linear equalizer at the receiver), whereas the multiplicative majorization relation is the basis for nonlinear decision-feedback MIMO transceivers (i.e., a linear precoder at the transmitter and a decision-feedback equalizer at the receiver).",
"There has been intense effort in the past decade to develop multiuser receiver structures which mitigate interference between users in spread-spectrum systems. While much of this research is performed at the physical layer, the appropriate power control and choice of signature sequences in conjunction with multiuser receivers and the resulting network user capacity is not well understood. In this paper we will focus on a single cell and consider both the uplink and downlink scenarios and assume a synchronous CDMA (S-CDMA) system. We characterize the user capacity of a single cell with the optimal linear receiver (MMSE receiver). The user capacity of the system is the maximum number of users per unit processing gain admissible in the system such that each user has its quality-of-service (QoS) requirement (expressed in terms of its desired signal-to-interference ratio) met. This characterization allows one to describe the user capacity through a simple effective bandwidth characterization: users are allowed in the system if and only if the sum of their effective bandwidths is less than the processing gain of the system. The effective bandwidth of each user is a simple monotonic function of its QoS requirement. We identify the optimal signature sequences and power control strategies so that the users meet their QoS requirement. The optimality is in the sense of minimizing the sum of allocated powers. It turns out that with this optimal allocation of signature sequences and powers, the linear MMSE receiver is just the corresponding matched filter for each user. We also characterize the effect of transmit power constraints on the user capacity.",
"We consider direct sequence code division multiple access (DS-CDMA), modeling interference from users communicating with neighboring base stations by additive colored noise. We consider two types of receiver structures: first we consider the information-theoretically optimal receiver and use the sum capacity of the channel as our performance measure. Second, we consider the linear minimum mean square error (LMMSE) receiver and use the signal-to-interference ratio (SIR) of the estimate of the symbol transmitted as our performance measure. Our main result is a constructive characterization of the possible performance in both these scenarios. A central contribution of this characterization is the derivation of a qualitative feature of the optimal performance measure in both the scenarios studied. We show that the sum capacity is a saddle function: it is convex in the additive noise covariances and concave in the user received powers. In the linear receiver case, we show that the mini average power required to meet a set of target performance requirements of the users is a saddle function: it is convex in the additive noise covariances and concave in the set of performance requirements.",
"The sum capacity of a multiuser synchronous CDMA system is completely characterized in the general case of asymmetric user power constraints-this solves the open problem posed by Rupf and Massey (see ibid., vol.40, p.1261-6, 1994) which had solved the equal power constraint case. We identify the signature sequences with real components that achieve sum capacity and indicate a simple recursive algorithm to construct them."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
In @cite_5 , the authors focus on transceiver design for point-to-point multiple-input multiple-output (MIMO) communication systems. Using the extra degrees of freedom provided by multiple transmit and receive antennas, and assuming either a minimum mean-square error (MMSE) receiver or a zero-forcing receiver, they show that the optimum linear precoder at the transmitter is the one diagonalizing the channels (i.e., independent noise on all channels and no interference among them) when the cost function to be minimized is Schur-concave (or the objective function to be maximized is Schur-convex). Their results do not directly apply to our problem since we consider sum-rate maximization in the presence of interfering links. In fact, we solve a special case of an open problem posed in Chapter 5 of @cite_5 on the optimum design of transceivers for the MIMO multiple-access channel.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2049285480"
],
"abstract": [
"Multiple-input multiple-output (MIMO) channels provide an abstract and unified representation of different physical communication systems, ranging from multi-antenna wireless channels to wireless digital subscriber line systems. They have the key property that several data streams can be simultaneously established. In general, the design of communication systems for MIMO channels is quite involved (if one can assume the use of sufficiently long and good codes, then the problem formulation simplifies drastically). The first difficulty lies on how to measure the global performance of such systems given the tradeoff on the performance among the different data streams. Once the problem formulation is defined, the resulting mathematical problem is typically too complicated to be optimally solved as it is a matrix-valued nonconvex optimization problem. This design problem has been studied for the past three decades (the first papers dating back to the 1970s) motivated initially by cable systems and more recently by wireless multi-antenna systems. The approach was to choose a specific global measure of performance and then to design the system accordingly, either optimally or suboptimally, depending on the difficulty of the problem. This text presents an up-to-date unified mathematical framework for the design of point-to-point MIMO transceivers with channel state information at both sides of the link according to an arbitrary cost function as a measure of the system performance. In addition, the framework embraces the design of systems with given individual performance on the data streams. Majorization theory is the underlying mathematical theory on which the framework hinges. It allows the transformation of the originally complicated matrix-valued nonconvex problem into a simple scalar problem. In particular, the additive majorization relation plays a key role in the design of linear MIMO transceivers (i.e., a linear precoder at the transmitter and a linear equalizer at the receiver), whereas the multiplicative majorization relation is the basis for nonlinear decision-feedback MIMO transceivers (i.e., a linear precoder at the transmitter and a decision-feedback equalizer at the receiver)."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
In @cite_12 , the authors focus on the design of capacity-achieving spreading code sequences for the CDMA multiple-access channel without fading. They allow multi-user detection for joint processing of users. Even though the performance figure of merit of interest in this paper is also related to the information capacity, our problem set-up is different from the set-up in @cite_12 . Here, we look at capacity-achieving transmission power allocations, rather than optimum spreading code sequence design, for fading Gaussian channels in the presence of interfering links. For example, our objective sum-rate function is Schur-convex, whereas it is Schur-concave in @cite_12 . In @cite_22 , the same authors extend the analysis in @cite_12 to the case of colored noise. In @cite_9 , they analyze the user capacity of CDMA systems, which is defined as the maximum number of users that can be admitted to the system by allocating spreading code sequences and transmission powers optimally without violating minimum @math requirements. In this work, we focus on achievable sum-rates rather than on user capacity.
|
{
"cite_N": [
"@cite_9",
"@cite_22",
"@cite_12"
],
"mid": [
"2134233838",
"2123074269",
"2003010976"
],
"abstract": [
"There has been intense effort in the past decade to develop multiuser receiver structures which mitigate interference between users in spread-spectrum systems. While much of this research is performed at the physical layer, the appropriate power control and choice of signature sequences in conjunction with multiuser receivers and the resulting network user capacity is not well understood. In this paper we will focus on a single cell and consider both the uplink and downlink scenarios and assume a synchronous CDMA (S-CDMA) system. We characterize the user capacity of a single cell with the optimal linear receiver (MMSE receiver). The user capacity of the system is the maximum number of users per unit processing gain admissible in the system such that each user has its quality-of-service (QoS) requirement (expressed in terms of its desired signal-to-interference ratio) met. This characterization allows one to describe the user capacity through a simple effective bandwidth characterization: users are allowed in the system if and only if the sum of their effective bandwidths is less than the processing gain of the system. The effective bandwidth of each user is a simple monotonic function of its QoS requirement. We identify the optimal signature sequences and power control strategies so that the users meet their QoS requirement. The optimality is in the sense of minimizing the sum of allocated powers. It turns out that with this optimal allocation of signature sequences and powers, the linear MMSE receiver is just the corresponding matched filter for each user. We also characterize the effect of transmit power constraints on the user capacity.",
"We consider direct sequence code division multiple access (DS-CDMA), modeling interference from users communicating with neighboring base stations by additive colored noise. We consider two types of receiver structures: first we consider the information-theoretically optimal receiver and use the sum capacity of the channel as our performance measure. Second, we consider the linear minimum mean square error (LMMSE) receiver and use the signal-to-interference ratio (SIR) of the estimate of the symbol transmitted as our performance measure. Our main result is a constructive characterization of the possible performance in both these scenarios. A central contribution of this characterization is the derivation of a qualitative feature of the optimal performance measure in both the scenarios studied. We show that the sum capacity is a saddle function: it is convex in the additive noise covariances and concave in the user received powers. In the linear receiver case, we show that the mini average power required to meet a set of target performance requirements of the users is a saddle function: it is convex in the additive noise covariances and concave in the set of performance requirements.",
"The sum capacity of a multiuser synchronous CDMA system is completely characterized in the general case of asymmetric user power constraints-this solves the open problem posed by Rupf and Massey (see ibid., vol.40, p.1261-6, 1994) which had solved the equal power constraint case. We identify the signature sequences with real components that achieve sum capacity and indicate a simple recursive algorithm to construct them."
]
}
|
1105.5861
|
2951074389
|
This paper considers the optimum single cell power-control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown, via the theory of majorization, that the optimum power allocation is binary, which means links are either "on" or "off". By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time-division-multiple-access (TDMA) maximizes the aggregate communication rate are established. Finally, a numerical study is performed to compare and contrast the performance achieved by the optimum binary power-control policy with other sub-optimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near-perfect interference cancellation efficiency.
|
Our results are different from the corresponding classic results in @cite_23 . In @cite_23 , the maximum Shannon-theoretic sum-rate is considered, whereas in the present paper we treat interference as pure Gaussian noise. Although our assumption simplifies the receiver, it complicates the power optimization problem. We note that the capacity region of the Gaussian multiple-access channel is well understood, and it is known that all points on the boundary of the rate region can be achieved by successive decoding @cite_14 . The optimal power-control for the fading Gaussian multiple-access channel with channel state information at the transmitters is also well understood @cite_16 . In the present paper, we arrive at the problem from a different angle: our interest is in understanding the structure of power-control problems in which interference is treated as Gaussian noise (very relevant for general interference networks), which excludes successive decoding and other multi-user decoding techniques.
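The gap between the two receiver assumptions can be made explicit with standard formulas. In the notation sketched here (with p_i the transmit power of user i, g_i its channel gain to the base station, and N_0 the noise power; the symbols are ours, chosen for illustration), the two quantities being contrasted are:

```latex
% Sum-rate with interference treated as Gaussian noise (no successive decoding):
R_{\mathrm{noise}} \;=\; \sum_{i=1}^{N} \log_2\!\left(1 + \frac{p_i g_i}{N_0 + \sum_{j \neq i} p_j g_j}\right)

% Shannon-theoretic sum capacity of the Gaussian multiple-access channel,
% achievable by ideal successive decoding:
C_{\mathrm{sum}} \;=\; \log_2\!\left(1 + \frac{\sum_{i=1}^{N} p_i g_i}{N_0}\right)
```

The first expression is the objective the power-control problem studied here optimizes over the powers, while the second is the benchmark that successive decoding approaches only under near-perfect interference cancellation.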
|
{
"cite_N": [
"@cite_14",
"@cite_23",
"@cite_16"
],
"mid": [
"2006428380",
"2158116199",
"2123097741"
],
"abstract": [
"It is shown that any point in the capacity region of a Gaussian multiple-access channel is achievable by single-user coding without requiring synchronization among users, provided that each user \"splits\" data and signal into two parts. Based on this result, a new multiple-access technique called rate-splitting multiple accessing (RSMA) is proposed. RSMA is a code-division multiple-access scheme for the M-user Gaussian multiple-access channel for which the effort of finding the codes for the M users, of encoding, and of decoding is that of at most 2M-1 independent point-to-point Gaussian channels. The effects of bursty sources, multipath fading, and inter-cell interference are discussed and directions for further research are indicated.",
"We consider a power control scheme for maximizing the information capacity of the uplink in single-cell multiuser communications with frequency-flat fading, under the assumption that the users attenuations are measured perfectly. Its main characteristics are that only one user transmits over the entire bandwidth at any particular time instant and that the users are allocated more power when their channels are good, and less when they are bad. Moreover, these features are independent of the statistics of the fading. Numerical results are presented for the case of single-path Rayleigh fading. We show that an increase in capacity over a perfectly-power controlled (Gaussian) channel can be achieved, especially if the number of users is large. By examining the bit error-rate with antipodal signalling, we show the inherent diversity in multiuser communications over fading channels.",
"In multiaccess wireless systems, dynamic allocation of resources such as transmit power, bandwidths, and rates is an important means to deal with the time-varying nature of the environment. We consider the problem of optimal resource allocation from an information-theoretic point of view. We focus on the multiaccess fading channel with Gaussian noise, and define two notions of capacity depending on whether the traffic is delay-sensitive or not. We characterize the throughput capacity region which contains the long-term achievable rates through the time-varying channel. We show that each point on the boundary of the region can be achieved by successive decoding. Moreover, the optimal rate and power allocations in each fading state can be explicitly obtained in a greedy manner. The solution can be viewed as the generalization of the water-filling construction for single-user channels to multiaccess channels with arbitrary number of users, and exploits the underlying polymatroid structure of the capacity region."
]
}
|
1105.5832
|
2096485625
|
This short paper gives an introduction to a research project to analyze how digital documents are structured and described. Using a phenomenological approach, this research will reveal common patterns that are used in data, independent from the particular technology in which the data is available. The ability to identify these patterns, on different levels of description, is important for several applications in digital libraries. A better understanding of data structuring will not only help to better capture singular characteristics of data by metadata, but will also recover intended structures of digital objects, beyond long term preservation.
|
Patterns, as structured methods of describing good design practice, were first introduced by Christopher Alexander in the field of architecture @cite_9 . In their words, ``each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem.'' Patterns were later adopted in other fields of engineering, especially in (object-oriented) software design @cite_4 @cite_13 . Some works describe patterns for specific data structuring or data modeling languages, among them Linked Data in RDF @cite_0 , markup languages @cite_5 , data models in enterprises @cite_17 @cite_7 , and meta models @cite_1 @cite_20 . A general limitation of these approaches is their focus on one specific formalization method. This practical limitation obscures more general data patterns that are independent of a particular encoding, and it conceals blind spots and weaknesses of the chosen formalism.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"36835278",
"173578282",
"1968983388",
"1585249014",
"",
"1752569088",
"1649645444",
"1480606017",
""
],
"abstract": [
"",
"A quick and reliable way to build proven databases for core business functions Industry experts raved about The Data Model Resource Book when it was first published in March 1997 because it provided a simple, cost-effective way to design databases for core business functions. Len Silverston has now revised and updated the hugely successful First Edition, while adding a companion volume to take care of more specific requirements of different businesses. Each volume is accompanied by a CD-ROM, which is sold separately. Each CD-ROM provides powerful design templates discussed in the books in a ready-to-use electronic format, allowing companies and individuals to develop the databases they need at a fraction of the cost and a third of the time it would take to build them from scratch. Updating the data models from the First Edition CD-ROM, this resource allows database developers to quickly load a core set of data models and customize them to support a wide range of business functions.",
"You can use this book to design a house for yourself with your family; you can use it to work with your neighbors to improve your town and neighborhood; you can use it to design an office, or a workshop, or a public building. And you can use it to guide you in the actual process of construction. After a ten-year silence, Christopher Alexander and his colleagues at the Center for Environmental Structure are now publishing a major statement in the form of three books which will, in their words, \"lay the basis for an entirely new approach to architecture, building and planning, which will we hope replace existing ideas and practices entirely.\" The three books are The Timeless Way of Building, The Oregon Experiment, and this book, A Pattern Language. At the core of these books is the idea that people should design for themselves their own houses, streets, and communities. This idea may be radical (it implies a radical transformation of the architectural profession) but it comes simply from the observation that most of the wonderful places of the world were not made by architects but by the people. At the core of the books, too, is the point that in designing their environments people always rely on certain \"languages,\" which, like the languages we speak, allow them to articulate and communicate an infinite variety of designs within a forma system which gives them coherence. This book provides a language of this kind. It will enable a person to make a design for almost any kind of building, or any part of the built environment. \"Patterns,\" the units of this language, are answers to design problems (How high should a window sill be? How many stories should a building have? How much space in a neighborhood should be devoted to grass and trees?). More than 250 of the patterns in this pattern language are given: each consists of a problem statement, a discussion of the problem with an illustration, and a solution. As the authors say in their introduction, many of the patterns are archetypal, so deeply rooted in the nature of things that it seemly likely that they will be a part of human nature, and human action, as much in five hundred years as they are today.",
"Chapter 1: About Metadata Models Chapter 2: Data Chapter 3: Activities, Functions, and Processes Chapter 4: Locations Chapter 5: People and Organizations Chapter 6: Events and Timing Chapter 7: Motivation Glossary References and Further Reading Index About the Author",
"",
"Combining expressiveness and plainness in the design of web documents is a difficult task. Validation languages are very powerful and designers are tempted to over-design specifications. This paper discusses an offbeat approach: describing any structured content of any document by only using a very small set of patterns, regardless of the format and layout of that document. The paper sketches out a formal analysis of some patterns, based on grammars and language theory. The study has been performed on XML languages and DTDs and has a twofold goal: coding empirical patterns in a formal representation, and discussing their completeness.",
"The book is an introduction to the idea of design patterns in software engineering, and a catalog of twenty-three common patterns. The nice thing is, most experienced OOP designers will find out they've known about patterns all along. It's just that they've never considered them as such, or tried to centralize the idea behind a given pattern so that it will be easily reusable.",
"",
""
]
}
|
1105.5344
|
2964039195
|
Considering a clique as a conservative definition of community structure, we examine how graph partitioning algorithms interact with cliques. Many popular community-finding algorithms partition the entire graph into non-overlapping communities. We show that on a wide range of empirical networks, from different domains, significant numbers of cliques are split across the separate partitions produced by these algorithms. We then examine the largest connected component of the subgraph formed by retaining only edges in cliques, and apply partitioning strategies that explicitly minimise the number of cliques split. We further examine several modern overlapping community finding algorithms, in terms of the interaction between cliques and the communities they find, and in terms of the global overlap of the sets of communities they find. We conclude that, due to the connectedness of many networks, any community finding algorithm that produces partitions must fail to find at least some significant structures. Moreover, contrary to traditional intuition, in some empirical networks, strong ties and cliques frequently do cross community boundaries; much community structure is fundamentally overlapping and unpartitionable in nature.
|
Here, Granovetter is using `clique' in the sociological sense, closer to the modern idea of a community, and the idea is that bridges -- narrow connecting links -- need to be crossed to carry information between such cliques. This idea is further summed up in the modern review of Fortunato @cite_0 . However, this work, in keeping with research @cite_30 @cite_28 on a limited number of other networks, finds evidence that structurally weak ties need not be crossed to traverse the network, contrary to the intuition just described. In fact, we show that while the traditional intuition may be appropriate in some cases, the structure of many empirical networks does indeed lead to cliques crossing the `bottleneck' formed by inter-community edges.
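As a concrete illustration of what cliques crossing community boundaries means operationally, the following minimal sketch (our own toy code based on networkx, with an illustrative toy graph rather than the empirical networks used in the paper) counts how many maximal cliques are split across the blocks of a non-overlapping partition:

    # Sketch: count maximal cliques that are split across the blocks of a partition.
    # The graph and partition below are illustrative, not data from the paper.
    import networkx as nx

    def count_split_cliques(G, partition, min_size=3):
        """partition maps each node to a community id (non-overlapping partition)."""
        split = total = 0
        for clique in nx.find_cliques(G):  # maximal cliques
            if len(clique) < min_size:
                continue
            total += 1
            if len({partition[v] for v in clique}) > 1:
                split += 1  # the clique spans more than one community
        return split, total

    # Toy example: two triangles sharing an edge, partitioned down the middle.
    G = nx.Graph([(1, 2), (2, 3), (1, 3), (2, 4), (3, 4)])
    partition = {1: "A", 2: "A", 3: "B", 4: "B"}
    print(count_split_cliques(G, partition))  # -> (2, 2): both triangles are split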
|
{
"cite_N": [
"@cite_0",
"@cite_28",
"@cite_30"
],
"mid": [
"2127048411",
"2141015754",
"2136852793"
],
"abstract": [
"The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.",
"This paper examines the celebrated \"Strength of weak ties\" theory of Granovetter(1973). We formalize the theory in terms of two hypotheses: one, for any threeplayers with two links present, the probability of a third link being present isincreasing in the strength of the two ties, and two, the removal of a weak tieincreases average distance in the network more than the removal of a strong tie. We test these hypotheses using data on the network of coauthorship amongeconomists. We find support for the hypothesis of transitivity of strong ties, but we reject thehypothesis that weak ties reduce distance more than strong ties do. We then identify two general features of networks which explain these findings:significant inequality in the distribution of connections across individuals andstronger ties among individuals who have more connections.",
"Social networks transmitting covert or sensitive information cannot use all ties for this purpose. Rather, they can only use a subset of ties that are strong enough to be “trusted”. This paper addresses whether it is still possible, under this restriction, for information to be transmitted widely and rapidly in social networks. We use transitivity as evidence of strong ties, requiring one or more shared contacts in order to count an edge as strong. We examine the effect of removing all non-transitive ties in two real social network data sets, imposing varying thresholds in the number of shared contacts. We observe that transitive ties occupy a large portion of the network and that removing all other ties, while causing some individuals to become disconnected, preserves the majority of the giant connected component. Furthermore, the average shortest path, important for the rapid diffusion of information, increases only slightly relative to the original network. We also evaluate the cost of forming transitive ties by modeling a random graph composed entirely of closed triads and comparing its connectivity and average shortest path with the equivalent Erdos–Renyi random graph. Both the empirical study and random model point to a robustness of strong ties with respect to the connectivity and small world property of social networks."
]
}
|
1105.3716
|
1676866384
|
We consider the problem of detecting clones in wireless mobile adhoc networks. We assume that one of the devices of the network has been cloned. Everything, including certificates and secret keys. This can happen quite easily, because of a virus that immediately after sending all the content of the infected device to the adversary destroys itself, or just because the owner has left his device unattended for a few minutes in a hostile environment. The problem is to detect this attack. We propose a solution in networks of mobile devices carried by individuals. These networks are composed by nodes that have the capability of using short-range communication technology like blue-tooth or Wi-Fi, where nodes are carried by mobile users, and where links appear and disappear according to the social relationships between the users. Our idea is to use social physical contacts, securely collected by wireless personal smart-phones, as a biometric way to authenticate the legitimate owner of the device and detect the clone attack. We introduce two mechanisms: Personal Marks and Community Certificates. Personal Marks is a simple cryptographic protocol that works very well when the adversary is an insider, a malicious node in the network that is part, or not very far, from the social community of the original device that has been cloned. Community Certificates work very well when the adversary is an outsider, a node that has the goal of using the stolen credentials when interacting with other nodes that are far in the social network from the original device. When combined, these mechanisms provide an excellent protection against this very strong attack. We prove our ideas and solutions with extensive simulations in a real world scenario-with mobility traces collected in a real life experiment
|
Although these distributed techniques do not present single points of failure, the overhead they incur, either in message traffic or in computational terms @cite_12 , is far from negligible. Moreover, all the aforementioned techniques rely on the fixed geographical positions of the nodes in the network, and are therefore not suited to mobile scenarios such as the one we consider @cite_8 @cite_37 .
|
{
"cite_N": [
"@cite_37",
"@cite_12",
"@cite_8"
],
"mid": [
"",
"2161864928",
"2112701160"
],
"abstract": [
"",
"Sensor nodes that are deployed in hostile environments are vulnerable to capture and compromise. An adversary may obtain private information from these sensors, clone and intelligently deploy them in the network to launch a variety of insider attacks. This attack process is broadly termed as a clone attack. Currently, the defenses against clone attacks are not only very few, but also suffer from selective interruption of detection and high overhead (computation and memory). In this paper, we propose a new effective and efficient scheme, called SET, to detect such clone attacks. The key idea of SET is to detect clones by computing set operations (intersection and union) of exclusive subsets in the network. First, SET securely forms exclusive unit subsets among one-hop neighbors in the network in a distributed way. This secure subset formation also provides the authentication of nodes’ subset membership. SET then employs a tree structure to compute non-overlapped set operations and integrates interleaved authentication to prevent unauthorized falsification of subset information during forwarding. Randomization is used to further make the exclusive subset and tree formation unpredictable to an adversary. We show the reliability and resilience of SET by analyzing the probability that an adversary may effectively obstruct the set operations. Performance analysis and simulations also demonstrate that the proposed scheme is more efficient than existing schemes from both communication and memory cost standpoints.",
"Pocket Switched Networks (PSN) make use of both human mobility and local global connectivity in order to transfer data between mobile users' devices. This falls under the Delay Tolerant Networking (DTN) space, focusing on the use of opportunistic networking. One key problem in PSN is in designing forwarding algorithms which cope with human mobility patterns. We present an experiment measuring forty-one humans' mobility at the Infocom 2005 conference. The results of this experiment are similar to our previous experiments in corporate and academic working environments, in exhibiting a power-law distrbution for the time between node contacts. We then discuss the implications of these results on the design of forwarding algorithms for PSN."
]
}
|
1105.3716
|
1676866384
|
We consider the problem of detecting clones in wireless mobile adhoc networks. We assume that one of the devices of the network has been cloned. Everything, including certificates and secret keys. This can happen quite easily, because of a virus that immediately after sending all the content of the infected device to the adversary destroys itself, or just because the owner has left his device unattended for a few minutes in a hostile environment. The problem is to detect this attack. We propose a solution in networks of mobile devices carried by individuals. These networks are composed by nodes that have the capability of using short-range communication technology like blue-tooth or Wi-Fi, where nodes are carried by mobile users, and where links appear and disappear according to the social relationships between the users. Our idea is to use social physical contacts, securely collected by wireless personal smart-phones, as a biometric way to authenticate the legitimate owner of the device and detect the clone attack. We introduce two mechanisms: Personal Marks and Community Certificates. Personal Marks is a simple cryptographic protocol that works very well when the adversary is an insider, a malicious node in the network that is part, or not very far, from the social community of the original device that has been cloned. Community Certificates work very well when the adversary is an outsider, a node that has the goal of using the stolen credentials when interacting with other nodes that are far in the social network from the original device. When combined, these mechanisms provide an excellent protection against this very strong attack. We prove our ideas and solutions with extensive simulations in a real world scenario-with mobility traces collected in a real life experiment
|
The idea of exploiting information regarding social ties between nodes is not new; indeed, it is common to a good part of the literature on pocket switched networks (PSN) and similar social networks. Much research has been dedicated to the analysis of data collected during real-life experiments, to computing statistical properties of human mobility, and to uncovering its structure in sub-communities @cite_16 @cite_18 @cite_37 @cite_11 @cite_30 @cite_33 @cite_36 . Later on, most of the work in the field focused on message forwarding and on finding the best strategy to relay messages so that they reach their destination as fast as possible (see @cite_31 @cite_34 @cite_14 , among many others). Security problems such as node capture @cite_1 and selfishness @cite_5 @cite_24 have also been addressed by making use of social relationships among nodes.
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_37",
"@cite_14",
"@cite_33",
"@cite_36",
"@cite_1",
"@cite_34",
"@cite_24",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_11"
],
"mid": [
"",
"",
"",
"2145517691",
"",
"",
"2154489778",
"2135712710",
"",
"",
"2082674813",
"2103344667",
""
],
"abstract": [
"",
"",
"",
"We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of inter-contact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This powerlaw finding was previously used to support the hypothesis that inter-contact time has a power law tail, and that common mobility models are not adequate. However, we observe that the time scale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus the exponential tail is important. We further show that already simple models such as random walk and random way point can exhibit the same dichotomy in the distribution of inter-contact time ascin empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes basedon power-law tails might be overly pessimistic.",
"",
"",
"Mobile Ad Hoc networks, due to the unattended nature of the network itself and the dispersed location of nodes, are subject to several unique security issues. One of the most vexed security threat is node capture. A few solutions have already been proposed to address this problem; however, those solutions are either centralized or focused on theoretical mobility models alone. In the former case the solution does not fit well the distributed nature of the network while, in the latter case, the quality of the solutions obtained for realistic mobility models severely differs from the results obtained for theoretical models. The rationale of this paper is inspired by the observation that re-encounters of mobile nodes do elicit a form of social ties. Leveraging these ties, it is possible to design efficient and distributed algorithms that, with a moderated degree of node cooperation, enforce the emergent property of node capture detection. In particular, in this paper we provide a proof of concept proposing a set of algorithms that leverage, to different extent, node mobility and node cooperation - that is, identifying social ties - to thwart node capture attack. In particular, we test these algorithms on a realistic mobility scenario. Extensive simulations show the quality of the proposed solutions and, more important, the viability of the proposed approach.",
"In this paper we seek to improve our understanding of human mobility in terms of social structures, and to use these structures in the design of forwarding algorithms for Pocket Switched Networks (PSNs). Taking human mobility traces from the real world, we discover that human interaction is heterogeneous both in terms of hubs (popular individuals) and groups or communities. We propose a social based forwarding algorithm, BUBBLE, which is shown empirically to improve the forwarding efficiency significantly compared to oblivious forwarding schemes and to PROPHET algorithm. We also show how this algorithm can be implemented in a distributed way, which demonstrates that it is applicable in the decentralised environment of PSNs.",
"",
"",
"Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.",
"As mobile devices become increasingly pervasive and commonly equipped with short-range radio capabilities, we observe that it might be possible to build a network based only on pair-wise contact of users. By using user mobility as a network transport mechanism, devices can intelligently route latency-insensitive packets using power-efficient short-range radio. Such a network could provide communication capability where no network infrastructure exists, or extend the reach of established infrastructure. To collect user mobility data, we ran two user studies by giving instrumented PDA devices to groups of students to carry for several weeks. We evaluate our work by providing empirical data that suggests that it is possible to make intelligent routing decisions based on only pair-wise contact, without previous knowledge of the mobility model or location information.",
""
]
}
|
1105.3716
|
1676866384
|
We consider the problem of detecting clones in wireless mobile adhoc networks. We assume that one of the devices of the network has been cloned. Everything, including certificates and secret keys. This can happen quite easily, because of a virus that immediately after sending all the content of the infected device to the adversary destroys itself, or just because the owner has left his device unattended for a few minutes in a hostile environment. The problem is to detect this attack. We propose a solution in networks of mobile devices carried by individuals. These networks are composed by nodes that have the capability of using short-range communication technology like blue-tooth or Wi-Fi, where nodes are carried by mobile users, and where links appear and disappear according to the social relationships between the users. Our idea is to use social physical contacts, securely collected by wireless personal smart-phones, as a biometric way to authenticate the legitimate owner of the device and detect the clone attack. We introduce two mechanisms: Personal Marks and Community Certificates. Personal Marks is a simple cryptographic protocol that works very well when the adversary is an insider, a malicious node in the network that is part, or not very far, from the social community of the original device that has been cloned. Community Certificates work very well when the adversary is an outsider, a node that has the goal of using the stolen credentials when interacting with other nodes that are far in the social network from the original device. When combined, these mechanisms provide an excellent protection against this very strong attack. We prove our ideas and solutions with extensive simulations in a real world scenario-with mobility traces collected in a real life experiment
|
The authors of @cite_26 propose two intrusion detection systems based on similar biometric ideas: the first is built upon Radio Frequency Fingerprinting (RFF), whereas the second leverages User Mobility Profiles (UMP). However, both detection systems are centralized and rely on the assumption that the intruder (the clone) behaves substantially differently from the real user in terms of geographical movements. Thus, compared with the solutions proposed in this paper, both systems are based on a completely different idea and cannot detect anomalies when the clone behaves similarly to the original node (for example, when the clone attack happens in a building). Lastly, neither solution is distributed.
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"1480870385"
],
"abstract": [
"Impersonation attacks in wireless and mobile networks by professional criminal groups are becoming more sophisticated. We confirm with simple risk analysis that impersonation attacks offer attractive incentives to malicious criminals and should therefore be given highest priority in research studies. We also survey our recent investigations on Radio Frequency Fingerprinting and User Mobility Profiles and discuss details of our methodologies for building enhanced intrusion detection systems for future wireless and mobile networks."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
Although asynchronous unchannel-coded PNC has been studied previously, only suboptimal decoding algorithms were considered. Refs. @cite_29 and @cite_15 argued that the largest BER performance penalty is 3 dB for BPSK modulation (for both phase and symbol asynchronies); however, this conclusion is based on suboptimal decoding.
|
{
"cite_N": [
"@cite_29",
"@cite_15"
],
"mid": [
"1975099099",
"2532711252"
],
"abstract": [
"A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.",
"When data are transmitted in a wireless network, they reach the target receiver as well as other receivers in the neighborhood. Rather than a blessing, this attribute is treated as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). Physical-layer network coding (PNC), however, has been proposed to take advantage of this attribute. Unlike \"conventional\" network coding which performs coding arithmetic on digital bit streams after they are decoded, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves and applies the network coding arithmetic at the physical layer. As a result, the destructive effect of interference is eliminated and the capacity of networks is boosted significantly. A key requirement of PNC is synchronization among nodes, which has not been addressed previously. This is the focus of this paper. Specifically, we investigate the impact of imperfect synchronization (i.e., finite synchronization errors) on PNC. We show that with BPSK modulation, PNC still yields significantly higher capacity than straightforward network coding when there are synchronization errors. And interestingly, this remains to be so even in the extreme case where synchronization is not performed at all."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
Ref. @cite_4 mentioned, without proof, that there is a maximum 6 dB BER performance penalty for QPSK modulation when @math and @math . To the best of our knowledge, no quantitative results or concrete explanation has been given for the general @math and @math case.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"1996646329"
],
"abstract": [
"Network coding, where relay nodes combine the information received from multiple links rather than simply replicating and forwarding the received packets, has shown the promise of significantly improving system performance. In very recent works, multiple researchers have presented methods for increasing system throughput by employing network coding inspired methods to mix packets at the physical layer: physical-layer network coding (PNC). A common example used to validate much of this work is that of two sources exchanging information through a single intervening relay - a situation that we denote the \"exchange channel\". In this paper, achievable rates of various schemes on the exchange channel are considered. Achievable rates for traditional multi-hop routing approaches, network coding approaches, and various PNC approaches are considered. A new method of PNC inspired by Tomlinson-Harashima precoding (THP), where a modulo operation is used to control the power at the relay, is introduced, and shown to have a slight advantage over analogous schemes at high signal-to-noise ratios (SNRs)."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
Ref. @cite_24 investigated systems in which symbols are aligned but phases are not. That scheme uses QPSK for the uplinks, but a higher-order constellation (e.g., 5QAM) for the downlinks when the uplink phase offset is not favorable for a QPSK downlink. That is, it varies the PNC mapping depending on the phase asynchrony. In this paper, we assume the simpler system in which both the uplink and the downlink use the same modulation, either BPSK or QPSK.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2127117141"
],
"abstract": [
"We investigate modulation schemes optimized for two-way wireless relaying systems, for which network coding is employed at the physical layer. We consider network coding based on denoise-and-forward (DNF) protocol, which consists of two stages: multiple access (MA) stage, where two terminals transmit simultaneously towards a relay, and broadcast (BC) stage, where the relay transmits towards the both terminals. We introduce a design principle of modulation and network coding, considering the superposed constellations during the MA stage. For the case of QPSK modulations at the MA stage, we show that QPSK constellations with an exclusive-or (XOR) network coding do not always offer the best transmission for the BC stage, and that there are several channel conditions in which unconventional 5-ary constellations lead to a better throughput performance. Through the use of sphere packing, we optimize the constellation for such an irregular network coding. We further discuss the design issue of the modulation in the case when the relay exploits diversity receptions such as multiple-antenna diversity and path diversity in frequency-selective fading. In addition, we apply our design strategy to a relaying system using higher-level modulations of 16QAM in the MA stage. Performance evaluations confirm that the proposed scheme can significantly improve end-to-end throughput for two-way relaying systems."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
For channel-coded PNC, an important issue is how to integrate the channel decoding operation and the network coding operation at the relay. Ref. ShengliJSAC09 @cite_18 presented a scheme that works well for synchronous channel-coded PNC; however, that scheme is not amenable to extension to asynchronous channel-coded PNC.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2119676644"
],
"abstract": [
"This paper investigates link-by-link channel-coded PNC (physical layer network coding), in which a critical process at the relay is to transform the superimposed channel-coded packets received from the two end nodes (plus noise), Y3 = X1+ X2+W3, to the network-coded combination of the source packets, S1 oplus S2. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain both S1 and S2 explicitly at the relay node. Trying to obtain S1 and S2 explicitly is an overkill if we are only interested in S1oplusS2. In this paper, we refer to the transformation Y3 rarr S1 oplus S2 as the channel-decoding- network-coding process (CNC) in that it involves both channel decoding and network coding operations. This paper shows that if we adopt the repeat accumulate (RA) channel code at the two end nodes, then there is a compatible decoder at the relay that can perform the transformation Y3 rarr S1oplusS2 efficiently. Specifically, we redesign the belief propagation decoding algorithm of the RA code for traditional point-to-point channel to suit the need of the PNC multiple-access channel. Simulation results show that our new scheme outperforms the previously proposed schemes significantly in terms of BER without added complexity."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
Ref. WubbenGlobecom10 @cite_3 proposed a method for phase-asynchronous channel-coded PNC, assuming the use of Low-Density Parity-Check (LDPC) codes. Unlike the scheme in WubbenGlobecom10 @cite_3 , our method deals with both phase and symbol asynchronies.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2140922065"
],
"abstract": [
"In this paper a physical-layer network coded two-way relay system applying Low-Density Parity-Check (LDPC) codes for error correction is considered, where two sources A and B desire to exchange information with each other by the help of a relay R. The critical process in such a system is the calculation of the network-coded transmit word at the relay on basis of the superimposed channel-coded words of the two sources. For this joint channel-decoding and network-encoding task a generalized Sum-Product Algorithm (SPA) is developed. This novel iterative decoding approach outperforms other recently proposed schemes as demonstrated by simulation results."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
Refs. ZorziSPAWC09 @cite_5 and FPNC11 @cite_9 investigated OFDM PNC. With OFDM, a symbol offset in the time domain is translated into different phase offsets on different subcarriers in the frequency domain. Since different subcarriers experience different phase offsets, there is an averaging effect as far as performance is concerned, and the system performance is not at the mercy of the worst-case phase asynchrony. The channel-decoding and network-coding process in @cite_5 and @cite_9 , however, is performed in a disjoint manner (using an XOR-CD decoder that will be described in Section ). By contrast, the joint channel-decoding and network-coding scheme (Jt-CNC) proposed in this paper yields much better performance (to be presented in Section ).
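The frequency-domain property referred to above is easy to verify numerically. The following small sketch (our own illustration, with assumed parameter values) checks that a cyclic delay of \(d\) samples multiplies subcarrier \(k\) of an \(N\)-point FFT by \(e^{-j 2\pi k d / N}\), so a single symbol offset in the time domain indeed appears as a different phase offset on each subcarrier:

    # Sketch: a cyclic time-domain delay becomes a per-subcarrier phase rotation
    # after the FFT -- the property exploited by the OFDM-based PNC schemes above.
    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 64, 3  # FFT size and delay in samples (illustrative values)
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

    X = np.fft.fft(x)
    X_delayed = np.fft.fft(np.roll(x, d))  # cyclic delay (valid within the OFDM cyclic prefix)

    k = np.arange(N)
    predicted = X * np.exp(-2j * np.pi * k * d / N)
    print(np.allclose(X_delayed, predicted))  # -> True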
|
{
"cite_N": [
"@cite_5",
"@cite_9"
],
"mid": [
"2164815412",
"2293741284"
],
"abstract": [
"Decode-and-forward physical layer network coding is one of the most high-performing ideas for wireless network coding. However, all the present schemes work under rather ideal assumptions, such as synchronous reception of the colliding signals. This paper proposes a simple and practical system which removes many of the assumptions made in the past and also designs a soft-output demodulator for this type of network coding.",
"Abstract This paper presents the first implementation of a two-way relay network based on the principle of physical-layer network coding (PNC). To date, only a simplified version of PNC, called analog network coding (ANC), has been successfully implemented. The advantage of ANC is that it is simple to implement; the disadvantage, on the other hand, is that the relay amplifies the noise along with the signal before forwarding the signal. PNC systems in which the relay performs XOR or other denoising PNC mappings of the received signal have the potential for significantly better performance. However, the implementation of such PNC systems poses many challenges. For example, the relay in a PNC system must be able to deal with symbol and carrier-phase asynchronies of the simultaneous signals received from multiple nodes, and the relay must perform channel estimation before detecting the signals. We investigate a PNC implementation in the frequency domain, referred to as FPNC, to tackle these challenges. FPNC is based on OFDM. In FPNC, XOR mapping is performed on the OFDM samples in each subcarrier rather than on the samples in the time domain. We implement FPNC on the universal soft radio peripheral (USRP) platform. Our implementation requires only moderate modifications of the packet preamble design of 802.11a g OFDM PHY. With the help of the cyclic prefix (CP) in OFDM, symbol asynchrony and the multi-path fading effects can be dealt with simultaneously in a similar fashion. Our experimental results show that symbol-synchronous and symbol-asynchronous FPNC have essentially the same BER performance, for both channel-coded and non-channel-coded FPNC systems."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
In this paper, we use the Repeat-Accumulate (RA) channel code to explain the principles of Jt-CNC and XOR-CD, as well as in the numerical studies. We believe that the general conclusions will be the same if the LDPC code is used instead. The application of the convolutional code in PNC has been studied previously for symbol-synchronous PNC ToTWireless10 @cite_11 . However, BP decoding was not used. The use of BP decoding for convolutional-coded PNC is an interesting area for further work because the convolutional code has lower complexity than the RA and LDPC codes.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2158058261"
],
"abstract": [
"We study the application of convolutional codes to two-way relay networks (TWRNs) with physical-layer network coding (PNC). When a relay node decodes coded signals transmitted by two source nodes simultaneously, we show that the Viterbi algorithm (VA) can be used by approximating the maximum likelihood (ML) decoding for XORed messages as two-user decoding. In this setup, for given memory length constraint, the two source nodes can choose the same convolutional code that has the largest free distance in order to maximize the performance. Motivated from the fact that the relay node only needs to decode XORed messages, a low complexity decoding scheme is proposed using a reduced-state trellis. We show that the reduced-state decoding can achieve the same diversity gain as the full-state decoding for fading channels."
]
}
|
1105.3144
|
2076284780
|
A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC.
|
The use of BP has also been proposed in the literature for multi-user detection BoutrosIT02 @cite_23 and for joint detection and decoding in the presence of phase noise and frequency offset BarbieriTCom07 @cite_19 . In ZhuTWireless09 @cite_6 , BP over a factor graph that describes the joint probability law of all unknowns and observations, as in the framework used in this paper, is used to decode one of two users in the presence of Gauss-Markov (non-block) fading. That work is along the lines of collision resolution CRESM @cite_2 rather than PNC mapping. In this paper, we assume that the channels can be perfectly estimated, and we leave out the detailed estimation procedure. Ref. ZhuTWireless09 @cite_6 provides a nice way to integrate channel estimation and detection using BP; applying that technique to PNC is an interesting area for further work.
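To make concrete the kind of per-observation computation that such a factor-graph formulation builds on, the following toy sketch (our own illustration with assumed variable names; it is restricted to symbol-synchronous, uncoded BPSK and therefore omits the asynchrony and channel-coding aspects treated in this paper) computes the posterior probability of the XOR bit from a single received relay sample, assuming known channels and Gaussian noise:

    # Toy sketch: soft PNC (XOR) mapping for one symbol-aligned BPSK sample at the relay,
    # assuming perfectly known channels h1, h2 and complex Gaussian noise of variance sigma2.
    # A BP-based receiver would feed such per-observation likelihoods into its factor graph.
    import numpy as np

    def xor_posterior(y, h1, h2, sigma2):
        """Return P(b1 XOR b2 = 1 | y) with BPSK mapping bit b -> symbol 1 - 2*b."""
        post = {0: 0.0, 1: 0.0}
        for b1 in (0, 1):
            for b2 in (0, 1):
                s = h1 * (1 - 2 * b1) + h2 * (1 - 2 * b2)  # noiseless superposed sample
                lik = np.exp(-abs(y - s) ** 2 / sigma2)    # unnormalised Gaussian likelihood
                post[b1 ^ b2] += lik                       # marginalise onto the XOR bit
        return post[1] / (post[0] + post[1])

    # Example with a relative phase offset between the two end nodes.
    h1, h2, sigma2 = 1.0, np.exp(1j * np.pi / 4), 0.1
    y = h1 * (+1) + h2 * (-1) + 0.05  # bits (0, 1), so the XOR bit is 1
    print(xor_posterior(y, h1, h2, sigma2))  # close to 1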
|
{
"cite_N": [
"@cite_19",
"@cite_6",
"@cite_23",
"@cite_2"
],
"mid": [
"1972245633",
"2148846951",
"2165942867",
"1483508116"
],
"abstract": [
"We present a new algorithm for joint detection and decoding of iteratively decodable codes transmitted over channels affected by a time-varying phase noise (PN) and a constant frequency offset. The proposed algorithm is obtained as an application of the sum-product algorithm to the factor graph representing the joint a posteriori distribution of the information symbols and the channel parameters given the channel output. The resulting algorithm employs the soft-output information on the coded symbols provided by the decoder and performs forward-backward recursions, taking into account the joint probability distribution of phase and frequency offset. We present simulation results for high-order coded modulation schemes based on low-density parity-check codes and serially concatenated convolutional codes, showing that, despite its low complexity, the algorithm is able to cope with a strong PN and a significant uncompensated frequency offset, thus avoiding the use of complicated data-aided frequency-estimation schemes operating on a known preamble. The robustness of the algorithm in the presence of a time-varying frequency offset is also discussed",
"Channel uncertainty and co-channel interference are two major challenges in the design of wireless systems such as future generation cellular networks. This paper studies receiver design for a wireless channel model with both time-varying Rayleigh fading and strong co-channel interference of similar form as the desired signal. It is assumed that the channel coefficients of the desired signal can be estimated through the use of pilots, whereas no pilot for the interference signal is available, as is the case in many practical wireless systems. Because the interference process is non-Gaussian, treating it as Gaussian noise generally often leads to unacceptable performance. In order to exploit the statistics of the interference and correlated fading in time, an iterative message-passing architecture is proposed for joint channel estimation, interference mitigation and decoding. Each message takes the form of a mixture of Gaussian densities where the number of components is limited so that the overall complexity of the receiver is constant per symbol regardless of the frame and code lengths. Simulation of both coded and uncoded systems shows that the receiver performs significantly better than conventional receivers with linear channel estimation, and is robust with respect to mismatch in the assumed fading model.",
"The synchronous chip-rate discrete-time CDMA channel with channel coding is analysed and the corresponding factor graph is presented. Iterative joint decoding can be derived in a simple and direct way by applying the sum-product algorithm to the factor graph. Since variables have degree 2, no computation takes place at the variable nodes. Computation at the code nodes is just soft-in soft-out (SISO) decoding, whose output is the extrinsic PMF of the coded symbols. Computation at the channel transition nodes is equivalent to MAP symbol-by-symbol multiuser detection, whose complexity is generally exponential in K. We show that several previously proposed low-complexity algorithms based on interference cancellation (IC) can be derived in a simple direct way by approximating the extrinsic PMF output by the SISO decoders either as a single mass-point PMF (hard decision) or as a Gaussian PDF with the same mean and variance (moment matching). Differently from all previously presented methods (derived from heuristic reasoning), we see clearly that extrinsic rather than a posteriori PMF should be fed back. This yields important advantages in terms of limiting achievable throughput.",
"This paper presents CRESM, a novel collision resolution method for decoding collided packets in random-access wireless networks. In a collision, overlapping signals from several sources are received simultaneously at a receiver. CRESM exploits symbol misalignment among the overlapping signals to recover the individual packets. CRESM can be adopted in 802.11 networks without modification of the transmitter design; only a simple DSP technique is needed at the receiver to decode the overlapping signals. Our simulations indicate that CRESM has better BER performance than the simplistic Successive Interference Cancellation (SIC) technique that treats interference as noise, for almost all SNR regimes. The implication of CRESM for random-access networking is significant: in general, using CRESM to resolve collisions of up to n packets, network throughput can be boosted by more than n times if the transmitters are allowed to transmit more aggressively in the MAC protocol."
]
}
|
1105.2344
|
2949186849
|
Many tasks in music information retrieval, such as recommendation and playlist generation for online radio, fall naturally into the query-by-example setting, wherein a user queries the system by providing a song, and the system responds with a list of relevant or similar song recommendations. Such applications ultimately depend on the notion of similarity between items to produce high-quality results. Current state-of-the-art systems employ collaborative filter methods to represent musical items, effectively comparing items in terms of their constituent users. While collaborative filter techniques perform well when historical data is available for each item, their reliance on historical data impedes performance on novel or unpopular items. To combat this problem, practitioners rely on content-based similarity, which naturally extends to novel items, but is typically out-performed by collaborative filter methods. In this article, we propose a method for optimizing content-based similarity by learning from a sample of collaborative filter data. The optimized content-based similarity metric can then be applied to answer queries on novel and unpopular items, while still maintaining high recommendation accuracy. The proposed system yields accurate and efficient representations of audio content, and experimental results show significant improvements in accuracy over competing content-based recommendation techniques.
|
Early studies of musical similarity followed the general strategy of first devising a model of audio content (e.g., spectral clusters @cite_37 or Gaussian mixture models @cite_7 ), applying some reasonable distance function (e.g., earth-mover's distance or Kullback-Leibler divergence), and then evaluating the proposed similarity model against some source of ground truth. Logan and Salomon @cite_37 and Aucouturier and Pachet @cite_7 evaluated against three notions of similarity between songs: same artist, same genre, and human survey data. Artist or genre agreement entails a strongly binary notion of similarity, which due to symmetry and transitivity may be unrealistically coarse in practice. Survey data can encode subtle relationships between items, for example, triplets of the form (song A is more similar to song B than to song C) @cite_7 @cite_16 @cite_21 . However, the expressive power of human survey data comes at a cost: while artist or genre meta-data is relatively inexpensive to collect for a set of songs, similarity survey data may require human feedback on a quadratic (for pairwise ratings) or cubic (for triplets) number of comparisons between songs.
|
{
"cite_N": [
"@cite_37",
"@cite_16",
"@cite_21",
"@cite_7"
],
"mid": [
"2126410803",
"2097219534",
"",
"48943499"
],
"abstract": [
"The present invention computer method and apparatus determines music similarity by generating a K-means (instead of Gaussian) cluster signature and a beat signature for each piece of music. The beat of the music is included in the subsequent distance measurement.",
"would be interesting and valuable to devise an automatic measure of the similarity between two musicians based only on an analysis of their recordings. To develop such a measure, however, presupposes some 'ground truth' training data describing the actual similarity between certain pairs of artists that constitute the desired output of the measure. Since artist similarity is wholly subjective, such data is not easily obtained. In this paper, we describe several attempts to construct a full matrix of similarity measures between a set of some 400 popular artists by regularizing limited subjective judgment data. We also detail our attempts to evaluate these measures by comparison with direct subjective similarity judgments collected via a web- based survey in April 2002. Overall, we find that subjective artist similarities are quite variable between users—casting doubt on the concept of a single 'ground truth'. Our best measure, however, gives reasonable agreement with the subjective data, and forms a useable stand-in. In addition, our evaluation methodology may be useful for comparing other measures of artist similarity.",
"",
"Electronic Music Distribution (EMD) is in demand of robust, automatically extracted music descriptors. We introduce a timbral similarity measures for comparing music titles. This measure is based on a Gaussian model of cepstrum coefficients. We describe the timbre extractor and the corresponding timbral similarity relation. We describe experiments in assessing the quality of the similarity relation, and show that the measure is able to yield interesting similarity relations, in particular when used in conjunction with other similarity relations. We illustrate the use of the descriptor in several EMD applications developed in the context of the Cuidado European project."
]
}
|
1105.2344
|
2949186849
|
Many tasks in music information retrieval, such as recommendation and playlist generation for online radio, fall naturally into the query-by-example setting, wherein a user queries the system by providing a song, and the system responds with a list of relevant or similar song recommendations. Such applications ultimately depend on the notion of similarity between items to produce high-quality results. Current state-of-the-art systems employ collaborative filter methods to represent musical items, effectively comparing items in terms of their constituent users. While collaborative filter techniques perform well when historical data is available for each item, their reliance on historical data impedes performance on novel or unpopular items. To combat this problem, practitioners rely on content-based similarity, which naturally extends to novel items, but is typically out-performed by collaborative filter methods. In this article, we propose a method for optimizing content-based similarity by learning from a sample of collaborative filter data. The optimized content-based similarity metric can then be applied to answer queries on novel and unpopular items, while still maintaining high recommendation accuracy. The proposed system yields accurate and efficient representations of audio content, and experimental results show significant improvements in accuracy over competing content-based recommendation techniques.
|
The idea to learn similarity from a collaborative filter follows from a series of positive results in music applications. Slaney and White @cite_36 demonstrate that an item-similarity metric derived from rating data matches human perception of similarity better than a content-based method. Similarly, it has been demonstrated that when combined with metric learning, collaborative filter similarity can be as effective as semantic tags for predicting survey data @cite_52 . Kim et al. @cite_1 demonstrated that collaborative filter similarity vastly out-performs content-based methods for predicting semantic tags. Barrington et al. @cite_10 conducted a user survey, and concluded that the iTunes Genius playlist algorithm (which is at least partially based on collaborative filters http://www.apple.com/pr/library/2008/09/09itunes.html ) produces playlists of equal or higher quality than competing methods based on acoustic content or meta-data.
|
{
"cite_N": [
"@cite_36",
"@cite_10",
"@cite_1",
"@cite_52"
],
"mid": [
"126054906",
"138579575",
"1247135059",
""
],
"abstract": [
"This paper describes an algorithm to measure the similarity of two multimedia objects, such as songs or movies, using users’ preferences. Much of the previous work on query-by-example (QBE) or music similarity uses detailed analysis of the object’s content. This is difficult and it is often impossible to capture how consumers react to the music. We argue that a large collection of user’s preferences is more accurate, at least in comparison to our benchmark system, at finding similar songs. We describe an algorithm based the song’s rating data, and show how this approach works by measuring its performance using an objective metric based on whether the same artist performed both songs. Our similarity results are based on 1.5 million musical judgments by 380,000 users. We test our system by generating playlists using a content-based system, our rating-based system, and a random list of songs. Music listeners greatly preferred the ratings-based playlists over the content-based and random playlists.",
"Genius is a popular commercial music recommender system that is based on collaborative filtering of huge amounts of user data. To understand the aspects of music similarity that collaborative filtering can capture, we compare Genius to two canonical music recommender systems: one based purely on artist similarity, the other purely on similarity of acoustic content. We evaluate this comparison with a user study of 185 subjects. Overall, Genius produces the best recommendations. We demonstrate that collaborative filtering can actually capture similarities between the acoustic content of songs. However, when evaluators can see the names of the recommended songs and artists, we find that artist similarity can account for the performance of Genius. A system that combines these musical cues could generate music recommendations that are as good as Genius, even when collaborative filtering data is unavailable.",
"Tags are useful text-based labels that encode semantic information about music (instrumentation, genres, emotions, geographic origins). While there are a number of ways to collect and generate tags, there is generally a data sparsity problem in which very few songs and artists have been accurately annotated with a sufficiently large set of relevant tags. We explore the idea of tag propagation to help alleviate the data sparsity problem. Tag propagation, originally proposed by , involves annotating a novel artist with tags that have been frequently associated with other similar artists. In this paper, we explore four approaches for computing artists similarity based on different sources of music information (user preference data, social tags, web documents, and audio content). We compare these approaches in terms of their ability to accurately propagate three different types of tags (genres, acoustic descriptors, social tags). We find that the approach based on collaborative filtering performs best. This is somewhat surprising considering that it is the only approach that is not explicitly based on notions of semantic similarity. We also find that tag propagation based on content-based music analysis results in relatively poor performance.",
""
]
}
|
1105.2344
|
2949186849
|
Many tasks in music information retrieval, such as recommendation and playlist generation for online radio, fall naturally into the query-by-example setting, wherein a user queries the system by providing a song, and the system responds with a list of relevant or similar song recommendations. Such applications ultimately depend on the notion of similarity between items to produce high-quality results. Current state-of-the-art systems employ collaborative filter methods to represent musical items, effectively comparing items in terms of their constituent users. While collaborative filter techniques perform well when historical data is available for each item, their reliance on historical data impedes performance on novel or unpopular items. To combat this problem, practitioners rely on content-based similarity, which naturally extends to novel items, but is typically out-performed by collaborative filter methods. In this article, we propose a method for optimizing content-based similarity by learning from a sample of collaborative filter data. The optimized content-based similarity metric can then be applied to answer queries on novel and unpopular items, while still maintaining high recommendation accuracy. The proposed system yields accurate and efficient representations of audio content, and experimental results show significant improvements in accuracy over competing content-based recommendation techniques.
|
Finally, there has been some previous work addressing the cold-start problem of collaborative filters for music recommendation by integrating audio content. Yoshii et al. @cite_11 formulate a joint probabilistic model of both audio content and collaborative filter data in order to predict user ratings of songs (using either or both representations), whereas our goal here is to use audio data to predict the similarities derived from a collaborative filter. Our problem setting is most similar to that of Stenzel and Kamps @cite_44 , wherein a CF matrix was derived from playlist data and clustered into latent pseudo-genres, and classifiers were trained to predict the cluster membership of songs from audio data. Our proposed setting differs in that we derive similarity at the user level (not the playlist level), and we automatically learn a content-based song similarity that directly optimizes the primary quantity of interest in an information retrieval system: the quality of the rankings it induces.
|
{
"cite_N": [
"@cite_44",
"@cite_11"
],
"mid": [
"92099993",
"2161937612"
],
"abstract": [
"We observed that for multimedia data – especially music collaborative similarity measures perform much better than similarity measures derived from content-based sound features. Our observation is based on a large scale evaluation with >250,000,000 collaborative data points crawled from the web and >190,000 songs annotated with content-based sound feature sets. A song mentioned in a playlist is regarded as one collaborative data point. In this paper we present a novel approach to bridging the performance gap between collaborative and contentbased similarity measures. In the initial training phase a model vector for each song is computed, based on collaborative data. Each vector consists of 200 overlapping unlabelled 'genres' or song clusters. Instead of using explicit numerical voting, we use implicit user profile data as collaborative data source, which is, for example, available as purchase histories in many large scale ecommerce applications. After the training phase, we used support vector machines based on content-based sound features to predict the collaborative model vectors. These predicted model vectors are finally used to compute the similarity between songs. We show that combining collaborative and content-based similarity measures can help to overcome the new item problem in e-commerce applications that offer a collaborative similarity recommender as service to their customers.",
"This paper presents a hybrid music recommender system that ranks musical pieces while efficiently maintaining collaborative and content-based data, i.e., rating scores given by users and acoustic features of audio signals. This hybrid approach overcomes the conventional tradeoff between recommendation accuracy and variety of recommended artists. Collaborative filtering, which is used on e-commerce sites, cannot recommend nonbrated pieces and provides a narrow variety of artists. Content-based filtering does not have satisfactory accuracy because it is based on the heuristics that the user's favorite pieces will have similar musical content despite there being exceptions. To attain a higher recommendation accuracy along with a wider variety of artists, we use a probabilistic generative model that unifies the collaborative and content-based data in a principled way. This model can explain the generative mechanism of the observed data in the probability theory. The probability distribution over users, pieces, and features is decomposed into three conditionally independent ones by introducing latent variables. This decomposition enables us to efficiently and incrementally adapt the model for increasing numbers of users and rating scores. We evaluated our system by using audio signals of commercial CDs and their corresponding rating scores obtained from an e-commerce site. The results revealed that our system accurately recommended pieces including nonrated ones from a wide variety of artists and maintained a high degree of accuracy even when new users and rating scores were added."
]
}
|
1105.2665
|
1737971179
|
Trace slicing is a widely used technique for execution trace analysis that is effectively used in program debugging, analysis and com- prehension. In this paper, we present a backward trace slicing technique that can be used for the analysis of Rewriting Logic theories. Our trace slicing technique allows us to systematically trace back rewrite sequences modulo equational axioms (such as associativity and commu- tativity) by means of an algorithm that dynamically simplifies the traces by detecting control and data dependencies, and dropping useless data that do not influence the final result. Our methodology is particularly suitable for analyzing complex, textually-large system computations such as those delivered as counter-example traces by Maude model-checkers.
|
We have presented a backward trace-slicing technique for rewriting logic theories. The key idea consists in tracing back ---through the rewrite sequence--- all the relevant symbols of the final state that we are interested in. Preliminary experiments demonstrate that the system works very satisfactorily on our benchmarks ---e.g., we obtained trace slices that achieved a considerable reduction of the original trace size. The tracing in these related approaches proceeds forward, while ours employs a backward strategy that is particularly convenient for error diagnosis and program debugging. Finally, @cite_13 and @cite_20 apply to TRSs whereas we deal with the richer framework of RWL that considers equations and equational axioms, namely rewriting modulo equational theories.
|
{
"cite_N": [
"@cite_13",
"@cite_20"
],
"mid": [
"1494096444",
"598205067"
],
"abstract": [
"Program slicing is a useful technique for debugging, testing, and analyzing programs. A program slice consists of the parts of a program which (potentially) affect the values computed at some point of interest. With rare exceptions, program slices have hitherto been computed and defined in ad-hoc and language-specific ways. The principal contribution of this paper is to show that general and semantically well-founded notions of slicing and dependence can be derived in a simple, uniform way from term rewriting systems (TRSs). Our slicing technique is applicable to any language whose semantics is specified in TRS form. Moreover, we show that our method admits an efficient implementation.",
"1. Abstract reduction systems 2. First-order term rewriting systems 3. Examples of TRSs and special rewriting formats 4. Orthogonality 5. Properties of rewriting: decidability and modularity 6. Termination 7. Completion of equational specifications 8. Equivalence of reductions 9. Strategies 10. Lambda calculus 11. Higher order rewriting 12. Infinitary rewriting 13. Term graph rewriting 14. Advanced ARS theory 15. Rewriting based languages and systems 16. Mathematical background."
]
}
|
1105.1982
|
1514496236
|
Cloud computing has made it possible for a user to be able to select a computing service precisely when needed. However, certain factors such as security of data and regulatory issues will impact a user's choice of using such a service. A solution to these problems is the use of a hybrid cloud that combines a user's local computing capabilities (for mission- or organization-critical tasks) with a public cloud (for less influential tasks). We foresee three challenges that must be overcome before the adoption of a hybrid cloud approach: 1) data design: How to partition relations in a hybrid cloud? The solution to this problem must account for the sensitivity of attributes in a relation as well as the workload of a user; 2) data security: How to protect a user's data in a public cloud with encryption while enabling query processing over this encrypted data? and 3) query processing: How to execute queries efficiently over both, encrypted and unencrypted data? This paper addresses these challenges and incorporates their solutions into an add-on tool for a Hadoop and Hive based cloud computing infrastructure.
|
Much research has focused on the problem of data partitioning in single @cite_8 and distributed @cite_12 systems, using strategies such as that given in @cite_2 . Reference @cite_6 uses a graph-based, data-driven partitioning approach for transactional workloads. Our work explicitly considers the cost of querying encrypted attributes that will be stored on the public cloud as a result of the data partitioning process.
|
{
"cite_N": [
"@cite_2",
"@cite_6",
"@cite_12",
"@cite_8"
],
"mid": [
"1581406059",
"2133741724",
"2105252819",
"1997375126"
],
"abstract": [
"",
"We present Schism, a novel workload-aware approach for database partitioning and replication designed to improve scalability of shared-nothing distributed databases. Because distributed transactions are expensive in OLTP settings (a fact we demonstrate through a series of experiments), our partitioner attempts to minimize the number of distributed transactions, while producing balanced partitions. Schism consists of two phases: i) a workload-driven, graph-based replication partitioning phase and ii) an explanation and validation phase. The first phase creates a graph with a node per tuple (or group of tuples) and edges between nodes accessed by the same transaction, and then uses a graph partitioner to split the graph into k balanced partitions that minimize the number of cross-partition transactions. The second phase exploits machine learning techniques to find a predicate-based explanation of the partitioning strategy (i.e., a set of range predicates that represent the same replication partitioning scheme produced by the partitioner). The strengths of Schism are: i) independence from the schema layout, ii) effectiveness on n-to-n relations, typical in social network databases, iii) a unified and fine-grained approach to replication and partitioning. We implemented and tested a prototype of Schism on a wide spectrum of test cases, ranging from classical OLTP workloads (e.g., TPC-C and TPC-E), to more complex scenarios derived from social network websites (e.g., Epinions.com), whose schema contains multiple n-to-n relationships, which are known to be hard to partition. Schism consistently outperforms simple partitioning schemes, and in some cases proves superior to the best known manual partitioning, reducing the cost of distributed transactions up to 30 .",
"Physical database design is important for query performance in a shared-nothing parallel database system, in which data is horizontally partitioned among multiple independent nodes. We seek to automate the process of data partitioning. Given a workload of SQL statements, we seek to determine automatically how to partition the base data across multiple nodes to achieve overall optimal (or close to optimal) performance for that workload. Previous attempts use heuristic rules to make those decisions. These approaches fail to consider all of the interdependent aspects of query performance typically modeled by today's sophisticated query optimizers.We present a comprehensive solution to the problem that has been tightly integrated with the optimizer of a commercial shared-nothing parallel database system. Our approach uses the query optimizer itself both to recommend candidate partitions for each table that will benefit each query in the workload, and to evaluate various combinations of these candidates. We compare a rank-based enumeration method with a random-based one. Our experimental results show that the former is more effective.",
"In addition to indexes and materialized views, horizontal and vertical partitioning are important aspects of physical design in a relational database system that significantly impact performance. Horizontal partitioning also provides manageability; database administrators often require indexes and their underlying tables partitioned identically so as to make common operations such as backup restore easier. While partitioning is important, incorporating partitioning makes the problem of automating physical design much harder since: (a) The choices of partitioning can strongly interact with choices of indexes and materialized views. (b) A large new space of physical design alternatives must be considered. (c) Manageability requirements impose a new constraint on the problem. In this paper, we present novel techniques for designing a scalable solution to this integrated physical design problem that takes both performance and manageability into account. We have implemented our techniques and evaluated it on Microsoft SQL Server. Our experiments highlight: (a) the importance of taking an integrated approach to automated physical design and (b) the scalability of our techniques."
]
}
|
1105.1982
|
1514496236
|
Cloud computing has made it possible for a user to be able to select a computing service precisely when needed. However, certain factors such as security of data and regulatory issues will impact a user's choice of using such a service. A solution to these problems is the use of a hybrid cloud that combines a user's local computing capabilities (for mission- or organization-critical tasks) with a public cloud (for less influential tasks). We foresee three challenges that must be overcome before the adoption of a hybrid cloud approach: 1) data design: How to partition relations in a hybrid cloud? The solution to this problem must account for the sensitivity of attributes in a relation as well as the workload of a user; 2) data security: How to protect a user's data in a public cloud with encryption while enabling query processing over this encrypted data? and 3) query processing: How to execute queries efficiently over both, encrypted and unencrypted data? This paper addresses these challenges and incorporates their solutions into an add-on tool for a Hadoop and Hive based cloud computing infrastructure.
|
Research efforts have also been made in the area of distributed query processing in the cloud, as given in @cite_0 . Distributed query processing has evolved from systems such as SDD-1 @cite_9 , which assumed homogeneous databases, to DISCO @cite_15 , which operated on heterogeneous data sources, and finally to Internet-scale systems such as Astrolabe @cite_16 . Since we need to execute queries over partitions containing unencrypted and encrypted data, we may not be able to process a query entirely on a public or private cloud. This leads to a cost model that is different from models that currently exist in the literature.
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_16",
"@cite_15"
],
"mid": [
"2110317924",
"2094711531",
"1997369208",
"2159888038"
],
"abstract": [
"Ad-hoc data processing has proven to be a critical paradigm for Internet companies processing large volumes of unstructured data. However, the emergence of cloud-based computing, where storage and CPU are outsourced to multiple third-parties across the globe, implies large collections of highly distributed and continuously evolving data. Our demonstration combines the power and simplicity of the MapReduce abstraction with a wide-scale distributed stream processor, Mortar. While our incremental MapReduce operators avoid data re-processing, the stream processor manages the placement and physical data flow of the operators across the wide area. We demonstrate a distributed web indexing engine against which users can submit and deploy continuous MapReduce jobs. A visualization component illustrates both the incremental indexing and index searches in real time.",
"The declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. SDD-1 is a distributed database management system currently being developed by Computer Corporation of America. Users interact with SDD-1 precisely as if it were a nondistributed database system because SDD-1 handles all issues arising from the distribution of data. These issues include distributed concurrency control, distributed query processing, resiliency to component failure, and distributed directory management. This paper presents an overview of the SDD-1 design and its solutions to the above problems. This paper is the first of a series of companion papers on SDD-1 (Bernstein and Shipman [2], [4], and Hammer and Shipman [14]).",
"Scalable management and self-organizational capabilities areemerging as central requirements for a generation of large-scale,highly dynamic, distributed applications. We have developed anentirely new distributed information management system calledAstrolabe. Astrolabe collects large-scale system state, permittingrapid updates and providing on-the-fly attribute aggregation. Thislatter capability permits an application to locate a resource, andalso offers a scalable way to track system state as it evolves overtime. The combination of features makes it possible to solve a widevariety of management and self-configuration problems. This paperdescribes the design of the system with a focus upon itsscalability. After describing the Astrolabe service, we presentexamples of the use of Astrolabe for locating resources,publish-subscribe, and distributed synchronization in largesystems. Astrolabe is implemented using a peer-to-peer protocol,and uses a restricted form of mobile code based on the SQL querylanguage for aggregation. This protocol gives rise to a novelconsistency model. Astrolabe addresses several securityconsiderations using a built-in PKI. The scalability of the systemis evaluated using both simulation and experiments; these confirmthat Astrolabe could scale to thousands and perhaps millions ofnodes, with information propagation delays in the tens of seconds.",
"Access to large numbers of data sources introduces new problems for users of heterogeneous distributed databases. End users and application programmers must deal with unavailable data sources. Database administrators must deal with incorporating new sources into the model. Database implementers must deal with the translation of queries between query languages and schemas. The Distributed Information Search COmponent (Disco) addresses these problems. Query processing semantics are developed to process queries over data sources which do not return answers. Data modeling techniques manage connections to data sources. The component interface to data sources flexibly handles different query languages and translates queries. This paper describes (a) the distributed mediator architecture of Disco, (b) its query processing semantics, (C) the data model and its modeling of data source connections, and (d) the interface to underlying data sources."
]
}
|
1105.1982
|
1514496236
|
Cloud computing has made it possible for a user to be able to select a computing service precisely when needed. However, certain factors such as security of data and regulatory issues will impact a user's choice of using such a service. A solution to these problems is the use of a hybrid cloud that combines a user's local computing capabilities (for mission- or organization-critical tasks) with a public cloud (for less influential tasks). We foresee three challenges that must be overcome before the adoption of a hybrid cloud approach: 1) data design: How to partition relations in a hybrid cloud? The solution to this problem must account for the sensitivity of attributes in a relation as well as the workload of a user; 2) data security: How to protect a user's data in a public cloud with encryption while enabling query processing over this encrypted data? and 3) query processing: How to execute queries efficiently over both, encrypted and unencrypted data? This paper addresses these challenges and incorporates their solutions into an add-on tool for a Hadoop and Hive based cloud computing infrastructure.
|
The area of privacy-preserving query processing has also received much attention @cite_11 @cite_14 . A homomorphic encryption based technique can be used to query over encrypted data @cite_17 , but it becomes expensive as the data size increases. We use the techniques given in @cite_19 to preserve the security of data. However, unlike @cite_19 , our approach can also store and query data locally.
|
{
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_17",
"@cite_11"
],
"mid": [
"2043508455",
"",
"1557386445",
"1544010666"
],
"abstract": [
"Rapid advances in networking and Internet technologies have fueled the emergence of the \"software as a service\" model for enterprise computing. Successful examples of commercially viable software services include rent-a-spreadsheet, electronic mail services, general storage services, disaster protection services. \"Database as a Service\" model provides users power to create, store, modify, and retrieve data from anywhere in the world, as long as they have access to the Internet. It introduces several challenges, an important issue being data privacy. It is in this context that we specifically address the issue of data privacy.There are two main privacy issues. First, the owner of the data needs to be assured that the data stored on the service-provider site is protected against data thefts from outsiders. Second, data needs to be protected even from the service providers, if the providers themselves cannot be trusted. In this paper, we focus on the second challenge. Specifically, we explore techniques to execute SQL queries over encrypted data. Our strategy is to process as much of the query as possible at the service providers' site, without having to decrypt the data. Decryption and the remainder of the query processing are performed at the client site. The paper explores an algebraic framework to split the query to minimize the computation at the client site. Results of experiments validating our approach are also presented.",
"",
"We introduce and formalize the notion of Verifiable Computation, which enables a computationally weak client to \"outsource\" the computation of a function F on various dynamically-chosen inputs x1, ...,xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The primary constraint is that the verification of the proof should require substantially less computational effort than computing F(i) from scratch. We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(mċpoly(λ)) time, where m is the bit-length of the output of F, and λ is a security parameter. The protocol requires a one-time pre-processing stage by the client which takes O(|C|ċpoly(λ)) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.",
"This paper introduces a new transactional “database-as-a-service” (DBaaS) called Relational Cloud. A DBaaS promises to move much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, privacy, and access control from the database users to the service operator, offering lower overall costs to users. Early DBaaS efforts include Amazon RDS and Microsoft SQL Azure, which are promising in terms of establishing the market need for such a service, but which do not address three important challenges: efficient multi-tenancy, elastic scalability, and database privacy. We argue that these three challenges must be overcome before outsourcing database software and management becomes attractive to many users, and cost-effective for service providers. The key technical features of Relational Cloud include: (1) a workload-aware approach to multi-tenancy that identifies the workloads that can be co-located on a database server, achieving higher consolidation and better performance than existing approaches; (2) the use of a graph-based data partitioning algorithm to achieve near-linear elastic scale-out even for complex transactional workloads; and (3) an adjustable security scheme that enables SQL queries to run over encrypted data, including ordering operations, aggregates, and joins. An underlying theme in the design of the components of Relational Cloud is the notion of workload awareness: by monitoring query patterns and data accesses, the system obtains information useful for various optimization and security functions, reducing the configuration effort for users and operators."
]
}
|
1105.1982
|
1514496236
|
Cloud computing has made it possible for a user to be able to select a computing service precisely when needed. However, certain factors such as security of data and regulatory issues will impact a user's choice of using such a service. A solution to these problems is the use of a hybrid cloud that combines a user's local computing capabilities (for mission- or organization-critical tasks) with a public cloud (for less influential tasks). We foresee three challenges that must be overcome before the adoption of a hybrid cloud approach: 1) data design: How to partition relations in a hybrid cloud? The solution to this problem must account for the sensitivity of attributes in a relation as well as the workload of a user; 2) data security: How to protect a user's data in a public cloud with encryption while enabling query processing over this encrypted data? and 3) query processing: How to execute queries efficiently over both, encrypted and unencrypted data? This paper addresses these challenges and incorporates their solutions into an add-on tool for a Hadoop and Hive based cloud computing infrastructure.
|
We have also identified a recent work, called Relational Cloud @cite_11 , that attempts to address the problems we have identified above. The difference between our work and Relational Cloud is that our data partitioning scheme considers the cost of querying encrypted attributes stored on a public cloud. Relational Cloud partitions data using a graph-based partitioning scheme without attaching any query cost constraints. These partitions are then encrypted with multiple layers of encryption and stored on a server. A query is then executed over the encrypted data with multiple rounds of communication between a client and a server, without considering the cost of decrypting intermediate relations. In our work, we explicitly consider the cost of queries that involve all three components of a hybrid cloud: a query over data in a private cloud, a query over non-sensitive (i.e., unencrypted) data, and a query over sensitive (i.e., encrypted) data on a public cloud. To the best of our knowledge, ours is the first work to explicitly estimate the cost of querying over unencrypted and encrypted data in a distributed setting.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1544010666"
],
"abstract": [
"This paper introduces a new transactional “database-as-a-service” (DBaaS) called Relational Cloud. A DBaaS promises to move much of the operational burden of provisioning, configuration, scaling, performance tuning, backup, privacy, and access control from the database users to the service operator, offering lower overall costs to users. Early DBaaS efforts include Amazon RDS and Microsoft SQL Azure, which are promising in terms of establishing the market need for such a service, but which do not address three important challenges: efficient multi-tenancy, elastic scalability, and database privacy. We argue that these three challenges must be overcome before outsourcing database software and management becomes attractive to many users, and cost-effective for service providers. The key technical features of Relational Cloud include: (1) a workload-aware approach to multi-tenancy that identifies the workloads that can be co-located on a database server, achieving higher consolidation and better performance than existing approaches; (2) the use of a graph-based data partitioning algorithm to achieve near-linear elastic scale-out even for complex transactional workloads; and (3) an adjustable security scheme that enables SQL queries to run over encrypted data, including ordering operations, aggregates, and joins. An underlying theme in the design of the components of Relational Cloud is the notion of workload awareness: by monitoring query patterns and data accesses, the system obtains information useful for various optimization and security functions, reducing the configuration effort for users and operators."
]
}
|
1105.1749
|
2952672470
|
Reinforcement Learning (RL) is a method for learning decision-making tasks that could enable robots to learn and adapt to their situation on-line. For an RL algorithm to be practical for robotic control tasks, it must learn in very few actions, while continually taking those actions in real-time. Existing model-based RL methods learn in relatively few actions, but typically take too much time between each action for practical on-line learning. In this paper, we present a novel parallel architecture for model-based RL that runs in real-time by 1) taking advantage of sample-based approximate planning methods and 2) parallelizing the acting, model learning, and planning processes such that the acting process is sufficiently fast for typical robot control cycles. We demonstrate that algorithms using this architecture perform nearly as well as methods using the typical sequential architecture when both are given unlimited time, and greatly out-perform these methods on tasks that require real-time actions such as controlling an autonomous vehicle.
|
Dyna @cite_10 takes a similar approach to these methods, performing small batch updates between each action. The Dyna-2 framework @cite_15 extends this approach to use sample-based search as its planning algorithm, combined with permanent and transient memories using linear function approximation. This improves the planning performance of the algorithm, but the sample efficiency of these methods still does not meet the requirements for on-line learning laid out in the introduction.
|
{
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2097778153",
"1491843047"
],
"abstract": [
"We present a reinforcement learning architecture, Dyna-2, that encompasses both sample-based learning and sample-based search, and that generalises across states during both learning and search. We apply Dyna-2 to high performance Computer Go. In this domain the most successful planning methods are based on sample-based search algorithms, such as UCT, in which states are treated individually, and the most successful learning methods are based on temporal-difference learning algorithms, such as Sarsa, in which linear function approximation is used. In both cases, an estimate of the value function is formed, but in the first case it is transient, computed and then discarded after each move, whereas in the second case it is more permanent, slowly accumulating over many moves and games. The idea of Dyna-2 is for the transient planning memory and the permanent learning memory to remain separate, but for both to be based on linear function approximation and both to be updated by Sarsa. To apply Dyna-2 to 9x9 Computer Go, we use a million binary features in the function approximator, based on templates matching small fragments of the board. Using only the transient memory, Dyna-2 performed at least as well as UCT. Using both memories combined, it significantly outperformed UCT. Our program based on Dyna-2 achieved a higher rating on the Computer Go Online Server than any handcrafted or traditional search based program.",
"This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments."
]
}
|
1105.0036
|
2952253187
|
We prove that there are 0/1 polytopes P that do not admit a compact LP formulation. More precisely we show that for every n there is a set X ⊆ {0,1}^n such that conv(X) must have extension complexity at least 2^{(n/2)(1-o(1))}. In other words, every polyhedron Q that can be linearly projected on conv(X) must have exponentially many facets. In fact, the same result also applies if conv(X) is restricted to be a matroid polytope. Conditioning on NP not contained in P/poly, our result rules out the existence of any compact formulation for the TSP polytope, even if the formulation may contain arbitrary real numbers.
|
A formulation of size @math for the permutahedron was provided by Goemans @cite_15 . In fact, @cite_15 also showed that this is tight up to constant factors. The lower bound of @cite_15 is based on the insight that the number of facets of any extension must be at least logarithmic in the number of vertices of the target polytope (which is @math for the permutahedron). The perfect matching polytope for planar graphs and graphs with bounded genus does admit a compact formulation @cite_14 @cite_5 . A useful tool to design such formulations is the Theorem of Balas @cite_9 @cite_10 , which describes the convex hull of the union of polyhedra. For @math -hard problems, one cannot, of course, expect the existence of any compact formulation. Nevertheless, Bienstock @cite_21 gave an approximate formulation of size @math for the Knapsack polytope. This means that optimizing any linear function over the approximate polytope will give the optimum Knapsack value, up to a @math factor. For a more detailed literature review, we refer to the surveys of Conforti, Cornuéjols and Zambelli @cite_11 and of Kaibel @cite_1 .
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2070982319",
"1972997295",
"2065139435",
"2337855014",
"",
"2160416472",
"2151308681",
""
],
"abstract": [
"We study the max cut problem in graphs not contractible toK5, and optimum perfect matchings in planar graphs. We prove that both problems can be formulated as polynomial size linear programs.",
"We discuss a new conceptual framework for the convexification of discrete optimization problems, and a general technique for obtaining approximations to the convex hull of the feasible set. The concepts come from disjunctive programming and the key tool is a description of the convex hull of a union of polyhedra in terms of a higher dimensional polyhedron. Although this description was known for several years, only recently was it shown by Jeroslow and Lowe to yield improved representations of discrete optimization problems. We express the feasible set of a discrete optimization problem as the intersection (conjunction) of unions of polyhedra, and define an operation that takes one such expression into another, equivalent one, with fewer conjuncts. We then introduce a class of relaxations based on replacing each conjunct (union of polyhedra) by its convex hull. The strength of the relaxations increases as the number of conjuncts decreases, and the class of relaxations forms a hierarchy that spans the spec...",
"We show that for each 0<@e@?1 there exists an extended formulation for the knapsack problem, of size polynomial in the number of variables, whose value is at most (1+@e) times the value of the integer program.",
"This survey is concerned with the size of perfect formulations for combinatorial optimization problems. By “perfect formulation”, we mean a system of linear inequalities that describes the convex hull of feasible solutions, viewed as vectors. Natural perfect formulations often have a number of inequalities that is exponential in the size of the data needed to describe the problem. Here we are particularly interested in situations where the addition of a polynomial number of extra variables allows a formulation with a polynomial number of inequalities. Such formulations are called “compact extended formulations”. We survey various tools for deriving and studying extended formulations, such as Fourier’s procedure for projection, Minkowski–Weyl’s theorem, Balas’ theorem for the union of polyhedra, Yannakakis’ theorem on the size of an extended formulation, dynamic programming, and variable discretization. For each tool that we introduce, we present one or several examples of how this tool is applied. In particular, we present compact extended formulations for several graph problems involving cuts, trees, cycles and matchings, and for the mixing set. We also present Bienstock’s approximate compact extended formulation for the knapsack problem, Goemans’ result on the size of an extended formulation for the permutahedron, and the Faenza-Kaibel extended formulation for orbitopes.",
"",
"We extend a result of Barahona, saying that T-join and perfect matching problems for planar graphs can be formulated as linear programming problems using only a polynomial number of constraints and variables, to graphs embeddable on an arbitrary, but fixed, surface.",
"In this paper we characterize the convex hull of feasible points for a disjunctive program, a class of problems which subsumes pure and mixed integer programs and many other nonconvex programming problems. Two representations are given for the convex hull of feasible points, each of which provides linear programming equivalents of the disjunctive program. The first one involves a number of new variables proportional to the number of terms in the disjunctive normal form of the logical constraints; the second one involves only the original variables and the facets of the convex hull. Among other results, we give necessary and sufficient conditions for an inequality to define a facet of the convex hull of feasible points. For the class of disjunctive programs that we call facial, we establish a property which makes it possible to obtain the convex hull of points satisfying n disjunctions, in a sequence of n steps, where each step generates the convex hull of points satisfying one disjunction only.",
""
]
}
|
1105.0074
|
1780659528
|
Recent years have seen several earnest initiatives from both academic researchers as well as open source communities to implement and deploy decentralized online social networks (DOSNs). The primary motivations for DOSNs are privacy and autonomy from big brotherly service providers. The promise of decentralization is complete freedom for end-users from any service providers both in terms of keeping privacy about content and communication, and also from any form of censorship. However decentralization introduces many challenges. One of the principal problems is to guarantee availability of data even when the data owner is not online, so that others can access the said data even when a node is offline or down. In this paper, we argue that a pragmatic design needs to explicitly allow for and leverage on system heterogeneity, and provide incentives for the resource rich participants in the system to contribute such resources. To that end we introduce SuperNova - a super-peer based DOSN architecture. While proposing the SuperNova architecture, we envision a dynamic system driven by incentives and reputation, however, investigation of such incentives and reputation, and its effect on determining peer behaviors is a subject for our future study. In this paper we instead investigate the efficacy of a super-peer based system at any time point (a snap-shot of the envisioned dynamic system), that is to say, we try to quantify the performance of SuperNova system given any (fixed) mix of peer population and strategies.
|
Work on peer-to-peer storage systems dates back to the OceanStore @cite_4 initiative to achieve archival storage using end-user resources. More than a decade of P2P storage related research later, we have several hybrid or peer-to-peer storage and backup cloud-like services in actual deployment, e.g., Wuala www.wuala.com , where the storage task is centrally coordinated, as well as academic prototypes such as FriendStore @cite_0 , where storage is carried out at nodes' friends. While such systems allow for some sharing and socializing, they were not originally designed for social networking. At a very high level, other instances of hybrid (central coordination assisted) peer-to-peer virtual community networks include internet telephony services like Skype @cite_19 , P2P massively multiplayer online games and virtual worlds @cite_15 , and social peer-to-peer file sharing systems like Tribler @cite_16 . The need for and challenges of realizing DOSNs were formalized recently @cite_1 , and since then there has been a flurry of academic initiatives both to realize complete systems and to surmount individual challenges. A more exhaustive survey of existing DOSN-specific research, covering various aspects of DOSN designs, can be found in @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_16"
],
"mid": [
"2108460112",
"2104210894",
"2141275242",
"2109572691",
"",
"2164433993",
"2161735290"
],
"abstract": [
"Current Online social networks (OSN) are web services run on logically centralized infrastructure. Large OSN sites use content distribution networks and thus distribute some of the load by caching for performance reasons, nevertheless there is a central repository for user and application data. This centralized nature of OSNs has several drawbacks including scalability, privacy, dependence on a provider, need for being online for every transaction, and a lack of locality. There have thus been several efforts toward decentralizing OSNs while retaining the functionalities offered by centralized OSNs. A decentralized online social network (DOSN) is a distributed system for social networking with no or limited dependency on any dedicated central infrastructure. In this chapter we explore the various motivations of a decentralized approach to online social networking, discuss several concrete proposals and types of DOSN as well as challenges and opportunities associated with decentralization.",
"OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.",
"Online Social Networks like Facebook, MySpace, Xing, etc. have become extremely popular. Yet they have some limitations that we want to overcome for a next generation of social networks: privacy concerns and requirements of Internet connectivity, both of which are due to web-based applications on a central site whose owner has access to all data. To overcome these limitations, we envision a paradigm shift from client-server to a peer-to-peer infrastructure coupled with encryption so that users keep control of their data and can use the social network also locally, without Internet access. This shift gives rise to many research questions intersecting networking, security, distributed systems and social network analysis, leading to a better understanding of how technology can support social interactions. This paper is an attempt to identify the core functionalities necessary to build social networking applications and services, and the research challenges in realizing them in a decentralized setting. In the tradition of research-path defining papers in the peer-to-peer community [5, 14], we highlight some challenges and opportunities for peer-to-peer in the era of social networks. We also present our own approach at realizing peer-to-peer social networks.",
"Today, it is common for users to own more than tens of gigabytes of digital pictures, videos, experimental traces, etc. Although many users already back up such data on a cheap second disk, it is desirable to also seek off-site redundancies so that important data can survive threats such as natural disasters and operator mistakes. Commercial online backup service is expensive [1, 11]. An alternative solution is to use a peer-to-peer storage system. However, existing cooperative backup systems are plagued by two long-standing problems [3, 4, 9, 19, 27]: enforcing minimal availability from participating nodes, and ensuring that nodes storing others' backup data will not deny restore service in times of need.",
"",
"We present an approach to support massively multiplayer games on peer-to-peer overlays. Our approach exploits the fact that players in MMGs display locality of interest, and therefore can form self-organizing groups based on their locations in the virtual world. To this end, we have designed scalable mechanisms to distribute the game state to the participating players and to maintain consistency in the face of node failures. The resulting system dynamically scales with the number of online players. It is more flexible and has a lower deployment cost than centralized games servers. We have implemented a simple game we call SimMud, and experimented with up to 4000 players to demonstrate the applicability of this approach.",
"Most current peer-to-peer (P2P) file-sharing systems treat their users as anonymous, unrelated entities, and completely disregard any social relationships between them. However, social phenomena such as friendship and the existence of communities of users with similar tastes or interests may well be exploited in such systems in order to increase their usability and performance. In this paper we present a novel social-based P2P file-sharing paradigm that exploits social phenomena by maintaining social networks and using these in content discovery, content recommendation, and downloading. Based on this paradigm's main concepts such as taste buddies and friends, we have designed and implemented the TRIBLER P2P file-sharing system as a set of extensions to BitTorrent. We present and discuss the design of TRIBLER, and we show evidence that TRIBLER enables fast content discovery and recommendation at a low additional overhead, and a significant improvement in download performance. Copyright © 2007 John Wiley & Sons, Ltd."
]
}
|
1105.0074
|
1780659528
|
Recent years have seen several earnest initiatives from both academic researchers as well as open source communities to implement and deploy decentralized online social networks (DOSNs). The primary motivations for DOSNs are privacy and autonomy from big brotherly service providers. The promise of decentralization is complete freedom for end-users from any service providers, both in terms of keeping privacy about content and communication, and also from any form of censorship. However, decentralization introduces many challenges. One of the principal problems is to guarantee availability of data even when the data owner is not online, so that others can access the said data even when a node is offline or down. In this paper, we argue that a pragmatic design needs to explicitly allow for and leverage system heterogeneity, and provide incentives for the resource-rich participants in the system to contribute such resources. To that end we introduce SuperNova - a super-peer based DOSN architecture. While proposing the SuperNova architecture, we envision a dynamic system driven by incentives and reputation; however, investigation of such incentives and reputation, and their effect on determining peer behavior, is a subject for our future study. In this paper we instead investigate the efficacy of a super-peer based system at any time point (a snapshot of the envisioned dynamic system), that is to say, we try to quantify the performance of the SuperNova system given any (fixed) mix of peer population and strategies.
|
Another recent approach uses end-user resources to help scale centralized OSNs @cite_12 , by having users distribute large files among friends. In such a scenario, however, the users' role is more that of a P2P content distribution network, and the privacy and autonomy issues of centralized OSNs persist.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2040908833"
],
"abstract": [
"The current Online Social Networks' infrastructure is composed by thousands of servers distributed across data-centers spread over several geographical locations. These servers store all the users' information (profile, contacts, contents, etc). Such an infrastructure incurs high operational and maintenance costs. Furthermore, this may threaten the scalability, the reliability, the availability and the privacy of the offered service. On the other hand this centralized approach gives to the OSN provider full control over a huge amount of valuable information. This information constitutes the basis of the OSN provider's business. Most of the storage capacity is dedicated to store the user's content (e.g. photos, videos, etc). We believe that OSN provider does not have strong incentive to dedicate a large part of its infrastructure to store majority part of this content. In this position paper we introduce the concept of user assisted Online Social Network (uaOSN). This novel architecture seeks to distribute the storage load associated to the content (e.g. photos, videos, etc) among the OSN's users. Thus the OSN provider keeps the control on the relevant information while reducing the operational and maintenance costs. We discuss the benefits that this proposal may produce for both, the OSN provider and the users. We also discuss the technical aspects to be considered and compare this solution to other distributed approaches."
]
}
|
1104.5200
|
2950595026
|
We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or schedule) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.
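For concreteness, the interference constraint referred to above is the standard SINR feasibility condition; the notation below (powers P_v, distances d, path-loss exponent α, threshold β, noise N) is generic and not taken from the abstract itself:

\[
\frac{P_v \,/\, d(s_v, r_v)^{\alpha}}{N + \sum_{\ell_w \in S,\ w \neq v} P_w \,/\, d(s_w, r_v)^{\alpha}} \;\ge\; \beta
\qquad \text{for every link } \ell_v = (s_v, r_v) \in S,
\]

where a set S of links is feasible in a single slot exactly when this condition holds for each of its links.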
|
As in @cite_11 , our results hold in arbitrary distance metrics (and do not require the common assumption that @math ).
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2152723361"
],
"abstract": [
"We present and analyze simple distributed contention resolution protocols for wireless networks. In our setting, one is given n pairs of senders and receivers located in a metric space. Each sender wants to transmit a signal to its receiver at a prespecified power level, e. g., all senders use the same, uniform power level as it is typically implemented in practice. Our analysis is based on the physical model in which the success of a transmission depends on the Signal-to-Interference-plus-Noise-Ratio (SINR). The objective is to minimize the number of time slots until all signals are successfully transmitted. Our main technical contribution is the introduction of a measure called maximum average affectance enabling us to analyze random contention-resolution algorithms in which each packet is transmitted in each step with a fixed probability depending on the maximum average affectance. We prove that the schedule generated this way is only an O(log2 n) factor longer than the optimal one, provided that the prespecified power levels satisfy natural monontonicity properties. By modifying the algorithm, senders need not to know the maximum average affectance in advance but only static information about the network. In addition, we extend our approach to multi-hop communication achieving the same appoximation factor."
]
}
|
1104.5200
|
2950595026
|
We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or schedule) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.
|
The scheduling problem has been profitably studied in the centralized setting by a number of works. The problem is known to be NP-hard @cite_10 . For length-monotone, sub-linear power assignments, a @math -approximation for general metrics has been achieved recently @cite_6 , following up on earlier work @cite_10 @cite_15 . In the bidirectional setting with power control, Fanghänel et al. @cite_17 provided a @math -approximation algorithm, recently improved to @math @cite_13 @cite_6 . For linear power on the plane, @cite_18 provides an algorithm with an additive approximation guarantee of @math . On the plane, a @math -approximation for power control for uni-directional links has recently been achieved @cite_16 , and @cite_12 provide a @math -approximation to the joint multi-hop scheduling and routing problem.
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_6",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2154125468",
"2106242763",
"2950068681",
"2100316242",
"1584433497",
"1750839748",
"",
""
],
"abstract": [
"In the interference scheduling problem, one is given a set of n communication requests described by source-destination pairs of nodes from a metric space. The nodes correspond to devices in a wireless network. Each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests. We introduce an instance-based measure of interference, denoted by I, that enables us to improve on previous results for the interference scheduling problem. We prove the upper and lower bounds in terms of I on the number of steps needed for scheduling a set of requests. For general power assignments, we prove a lower bound of @W(I ([email protected])) steps, where @D denotes the aspect ratio of the metric. When restricting to the two-dimensional Euclidean space (as in the previous work) the bound improves to @W(I [email protected]). Alternatively, when restricting to linear power assignments, the lower bound improves even to @W(I). The lower bounds are complemented by an efficient algorithm computing a schedule for linear power assignments using only O(Ilogn) steps. A more sophisticated algorithm computes a schedule using even only O(I+log^2n) steps. For dense instances in the two-dimensional Euclidean space, this gives a constant factor approximation for scheduling under linear power assignments, which shows that the price for using linear (and, hence, energy-efficient) power assignments is bounded by a factor of O([email protected]). In addition, we extend these results for single-hop scheduling to multi-hop scheduling and combined scheduling and routing problems, where our analysis generalizes the previous results towards general metrics and improves on the previous approximation factors.",
"In this work we study the problem of determining the throughput capacity of a wireless network. We propose a scheduling algorithm to achieve this capacity within an approximation factor. Our analysis is performed in the physical interference model, where nodes are arbitrarily distributed in Euclidean space. We consider the problem separately from the routing problem and the power control problem, i.e., all requests are single-hop, and all nodes transmit at a fixed power level. The existing solutions to this problem have either concentrated on special-case topologies, or presented optimality guarantees which become arbitrarily bad (linear in the number of nodes) depending on the network's topology. We propose the first scheduling algorithm with approximation guarantee independent of the topology of the network. The algorithm has a constant approximation guarantee for the problem of maximizing the number of links scheduled in one time-slot. Furthermore, we obtain a O(log n) approximation for the problem of minimizing the number of time slots needed to schedule a given set of requests. Simulation results indicate that our algorithm does not only have an exponentially better approximation ratio in theory, but also achieves superior performance in various practical network scenarios. Furthermore, we prove that the analysis of the algorithm is extendable to higher-dimensional Euclidean spaces, and to more realistic bounded-distortion spaces, induced by non-isotropic signal distortions. Finally, we show that it is NP-hard to approximate the scheduling problem to within n 1-epsiv factor, for any constant epsiv > 0, in the non-geometric SINR model, in which path-loss is independent of the Euclidean coordinates of the nodes.",
"The capacity of a wireless network is the maximum possible amount of simultaneous communication, taking interference into account. Formally, we treat the following problem. Given is a set of links, each a sender-receiver pair located in a metric space, and an assignment of power to the senders. We seek a maximum subset of links that are feasible in the SINR model: namely, the signal received on each link should be larger than the sum of the interferences from the other links. We give a constant-factor approximation that holds for any length-monotone, sub-linear power assignment and any distance metric. We use this to give essentially tight characterizations of capacity maximization under power control using oblivious power assignments. Specifically, we show that the mean power assignment is optimal for capacity maximization of bi-directional links, and give a tight @math -approximation of scheduling bi-directional links with power control using oblivious power. For uni-directional links we give a nearly optimal @math -approximation to the power control problem using mean power, where @math is the ratio of longest and shortest links. Combined, these results clarify significantly the centralized complexity of wireless communication problems.",
"In this paper we address a common question in wireless communication: How long does it take to satisfy an arbitrary set of wireless communication requests? This problem is known as the wireless scheduling problem. Our main result proves that wireless scheduling is in APX. In addition we present a robustness result, showing that constant parameter and model changes will modify the result only by a constant.",
"In modern wireless networks devices are able to set the power for each transmission carried out. Experimental but also theoretical results indicate that such power control can improve the network capacity significantly. We study this problem in the physical interference model using SINR constraints. In the SINR capacity maximization problem, we are given n pairs of senders and receivers, located in a metric space (usually a so-called fading metric). The algorithm shall select a subset of these pairs and choose a power level for each of them with the objective of maximizing the number of simultaneous communications. This is, the selected pairs have to satisfy the SINR constraints with respect to the chosen powers. We present the first algorithm achieving a constant-factor approximation in fading metrics. The best previous results depend on further network parameters such as the ratio of the maximum and the minimum distance between a sender and its receiver. Expressed only in terms of n, they are (trivial) Ω(n) approximations. Our algorithm still achieves an O(log n) approximation if we only assume to have a general metric space rather than a fading metric. Furthermore, existing approaches work well together with the algorithm allowing it to be used in singlehop and multi-hop scheduling scenarios. Here, we also get polylog n approximations.",
"We consider the scheduling of arbitrary wireless links in the physical model of interference to minimize the time for satisfying all requests. We study here the combined problem of scheduling and power control, where we seek both an assignment of power settings and a partition of the links so that each set satisfies the signal-to-interference-plus-noise (SINR) constraints. We give an algorithm that attains an approximation ratio of O(log n ċ log log Δ), where n is the number of links and Δ is the ratio between the longest and the shortest link length. Under the natural assumption that lengths are represented in binary, this gives the first approximation ratio that is polylogarithmic in the size of the input. The algorithm has the desirable property of using an oblivious power assignment, where the power assigned to a sender depends only on the length of the link. We give evidence that this dependence on Δ is unavoidable, showing that any reasonably behaving oblivious power assignment results in a Ω(log log Δ)-approximation. These results hold also for the (weighted) capacity problem of finding a maximum (weighted) subset of links that can be scheduled in a single time slot. In addition, we obtain improved approximation for a bidirectional variant of the scheduling problem, give partial answers to questions about the utility of graphs for modeling physical interference, and generalize the setting from the standard 2-dimensional Euclidean plane to doubling metrics. Finally, we explore the utility of graph models in capturing wireless interference.",
"",
""
]
}
|
1104.5200
|
2950595026
|
We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or schedule) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.
|
In the distributed setting, the related capacity problem (where one wants to find the maximum subset of @math that can be transmitted in a single slot) has been studied in a series of papers @cite_4 @cite_0 @cite_8 , culminating in a @math -approximation algorithm for uniform power @cite_8 . However, these game-theoretic algorithms take time polynomial in @math to converge, and thus are better seen as determining capacity rather than realizing it in real time.
|
{
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_8"
],
"mid": [
"2095134086",
"2129219909",
"2027406966"
],
"abstract": [
"In this paper we consider the problem of maximizing wireless network capacity (a.k.a. one-shot scheduling) in both the protocol and physical models. We give the first distributed algorithms with provable guarantees in the physical model, and show how they can be generalized to more complicated metrics and settings in which the physical assumptions are slightly violated. We also give the first algorithms in the protocol model that do not assume transmitters can coordinate with their neighbors in the interference graph, so every transmitter chooses whether to broadcast based purely on local events. Our techniques draw heavily from algorithmic game theory and machine learning theory, even though our goal is a distributed algorithm. Indeed, our main results allow every transmitter to run any algorithm it wants, so long as its algorithm has a learning-theoretic property known as no-regret in a game-theoretic setting.",
"In this paper we consider the problem of maximizing the number of supported connections in arbitrary wireless networks where a transmission is supported if and only if the signal-to-interference-plus-noise ratio at the receiver is greater than some threshold. The aim is to choose transmission powers for each connection so as to maximize the number of connections for which this threshold is met. We believe that analyzing this problem is important both in its own right and also because it arises as a subproblem in many other areas of wireless networking. We study both the complexity of the problem and also present some game theoretic results regarding capacity that is achieved by completely distributed algorithms. We also feel that this problem is intriguing since it involves both continuous aspects (i.e. choosing the transmission powers) as well as discrete aspects (i.e. which connections should be supported). Our results are: ldr We show that maximizing the number of supported connections is NP-hard, even when there is no background noise. This is in contrast to the problem of determining whether or not a given set of connections is feasible since that problem can be solved via linear programming. ldr We present a number of approximation algorithms for the problem. All of these approximation algorithms run in polynomial time and have an approximation ratio that is independent of the number of connections. ldr We examine a completely distributed algorithm and analyze it as a game in which a connection receives a positive payoff if it is successful and a negative payoff if it is unsuccessful while transmitting with nonzero power. We show that in this game there is not necessarily a pure Nash equilibrium but if such an equilibrium does exist the corresponding price of anarchy is independent of the number of connections. We also show that a mixed Nash equilibrium corresponds to a probabilistic transmission strategy and in this case such an equilibrium always exists and has a price of anarchy that is independent of the number of connections. This work was supported by NSF contract CCF-0728980 and was performed while the second author was visiting Bell Labs in Summer, 2008.",
"We consider the capacity problem (or, the single slot scheduling problem) in wireless networks. Our goal is to maximize the number of successful connections in arbitrary wirelessnetworks where a transmission is successful only if the signal-to-interference-plus-noise ratio at the receiver is greater than some threshold. We study a game theoretic approach towards capacity maximization introduced by Andrews and Dinitz (INFOCOM 2009) and Dinitz (INFOCOM 2010). We prove vastly improved bounds for the game theoretic algorithm. In doing so, we achieve the first distributed constant factor approximation algorithm for capacity maximization for the uniform power assignment. When compared to the optimum where links may use an arbitrary power assignment, we prove a O(log Δ) approximation, where Δ is the ratio between the largest and the smallest link in the network. This is an exponential improvement of the approximation factor compared to existing results for distributed algorithms. All our results work for links located in any metric space. In addition, we provide simulation studies clarifying the picture on distributed algorithms for capacity maximization."
]
}
|
1104.5200
|
2950595026
|
We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or schedule) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.
|
For distributed scheduling, the only work we are aware of remains the interesting paper by Kesselheim and Vöcking @cite_11 , who give a distributed @math -approximation algorithm for the scheduling problem with any fixed length-monotone and sub-linear power assignment. They consider the model with no free acknowledgements; however, their results do not improve even if free acknowledgements are assumed. Thus, in all cases considered, our results constitute a @math -factor improvement.
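To illustrate the style of protocol being discussed, the following is a minimal simulation-only sketch of random contention resolution under uniform power in the SINR model. It is not the algorithm of @cite_11 nor the one analyzed in this paper; the transmission probability, the SINR parameters, and the helper names are placeholder assumptions chosen for the example.

```python
import math
import random

ALPHA, BETA, NOISE, POWER = 3.0, 1.5, 1e-9, 1.0  # assumed SINR parameters


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def successful(idx, active, links):
    """Check the SINR condition for link `idx` against all other active senders."""
    s, r = links[idx]
    signal = POWER / dist(s, r) ** ALPHA
    interference = sum(POWER / dist(links[j][0], r) ** ALPHA
                       for j in active if j != idx)
    return signal >= BETA * (NOISE + interference)


def contention_resolution(links, q=0.1, max_slots=10_000):
    """Each pending link transmits independently with probability q per slot;
    a link leaves the system once its transmission meets the SINR condition.
    Returns the number of slots used, or None if max_slots is exceeded."""
    pending = set(range(len(links)))
    for slot in range(1, max_slots + 1):
        active = [i for i in pending if random.random() < q]
        for i in [i for i in active if successful(i, active, links)]:
            pending.discard(i)
        if not pending:
            return slot
    return None


# toy usage: 20 random unit-length links in a 10x10 square
random.seed(0)
links = []
for _ in range(20):
    sx, sy = random.uniform(0, 10), random.uniform(0, 10)
    links.append(((sx, sy), (sx + 1.0, sy)))
print(contention_resolution(links))
```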
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2152723361"
],
"abstract": [
"We present and analyze simple distributed contention resolution protocols for wireless networks. In our setting, one is given n pairs of senders and receivers located in a metric space. Each sender wants to transmit a signal to its receiver at a prespecified power level, e. g., all senders use the same, uniform power level as it is typically implemented in practice. Our analysis is based on the physical model in which the success of a transmission depends on the Signal-to-Interference-plus-Noise-Ratio (SINR). The objective is to minimize the number of time slots until all signals are successfully transmitted. Our main technical contribution is the introduction of a measure called maximum average affectance enabling us to analyze random contention-resolution algorithms in which each packet is transmitted in each step with a fixed probability depending on the maximum average affectance. We prove that the schedule generated this way is only an O(log2 n) factor longer than the optimal one, provided that the prespecified power levels satisfy natural monontonicity properties. By modifying the algorithm, senders need not to know the maximum average affectance in advance but only static information about the network. In addition, we extend our approach to multi-hop communication achieving the same appoximation factor."
]
}
|
1104.5200
|
2950595026
|
We study the wireless scheduling problem in the SINR model. More specifically, given a set of @math links, each a sender-receiver pair, we wish to partition (or schedule) the links into the minimum number of slots, each satisfying interference constraints allowing simultaneous transmission. In the basic problem, all senders transmit with the same uniform power. We give a distributed @math -approximation algorithm for the scheduling problem, matching the best ratio known for centralized algorithms. It holds in arbitrary metric space and for every length-monotone and sublinear power assignment. It is based on an algorithm of Kesselheim and Vöcking, whose analysis we improve by a logarithmic factor. We show that every distributed algorithm uses @math slots to schedule certain instances that require only two slots, which implies that the best possible absolute performance guarantee is logarithmic.
|
In @cite_11 , the authors introduce a versatile measure, the maximum average affectance @math , defined by @math . The authors then show two results. On the one hand, they show that @math , where @math . On the other hand, they present a natural algorithm (we use the same algorithm in this work) which schedules all links in @math slots, thus achieving a @math -approximation. We can show (Appendix ) that both of these bounds are tight; thus it is not possible to obtain an improved approximation using the measure @math .
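As a rough guide to the shape of such a measure (the precise normalization used in @cite_11 may differ; the constant c and the capping at 1 below are assumptions), the affectance of link ℓ_v caused by link ℓ_w and the maximum average affectance can be written as

\[
a_w(v) \;=\; \min\Big\{ 1,\; c \cdot \frac{P_w / d(s_w, r_v)^{\alpha}}{P_v / d(s_v, r_v)^{\alpha}} \Big\},
\qquad
\bar{A}(L) \;=\; \max_{L' \subseteq L} \; \frac{1}{|L'|} \sum_{\ell_v \in L'} \sum_{\ell_w \in L' \setminus \{\ell_v\}} a_w(v),
\]

i.e., the largest, over subsets of links, of the average total affectance experienced by links within the subset.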
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2152723361"
],
"abstract": [
"We present and analyze simple distributed contention resolution protocols for wireless networks. In our setting, one is given n pairs of senders and receivers located in a metric space. Each sender wants to transmit a signal to its receiver at a prespecified power level, e. g., all senders use the same, uniform power level as it is typically implemented in practice. Our analysis is based on the physical model in which the success of a transmission depends on the Signal-to-Interference-plus-Noise-Ratio (SINR). The objective is to minimize the number of time slots until all signals are successfully transmitted. Our main technical contribution is the introduction of a measure called maximum average affectance enabling us to analyze random contention-resolution algorithms in which each packet is transmitted in each step with a fixed probability depending on the maximum average affectance. We prove that the schedule generated this way is only an O(log2 n) factor longer than the optimal one, provided that the prespecified power levels satisfy natural monontonicity properties. By modifying the algorithm, senders need not to know the maximum average affectance in advance but only static information about the network. In addition, we extend our approach to multi-hop communication achieving the same appoximation factor."
]
}
|
1104.5392
|
1761184600
|
The Cloud Computing paradigm is providing system architects with a new powerful tool for building scalable applications. Clouds allow allocation of resources on a "pay-as-you-go" model, so that additional resources can be requested during peak loads and released after that. However, this flexibility asks for appropriate dynamic reconfiguration strategies. In this paper we describe SAVER (qoS-Aware workflows oVER the Cloud), a QoS-aware algorithm for executing workflows involving Web Services hosted in a Cloud environment. SAVER allows execution of arbitrary workflows subject to response time constraints. SAVER uses a passive monitor to identify workload fluctuations based on the observed system response time. The information collected by the monitor is used by a planner component to identify the minimum number of instances of each Web Service which should be allocated in order to satisfy the response time constraint. SAVER uses a simple Queueing Network (QN) model to identify the optimal resource allocation. Specifically, the QN model is used to identify bottlenecks, and predict the system performance as Cloud resources are allocated or released. The parameters used to evaluate the model are those collected by the monitor, which means that SAVER does not require any particular knowledge of the Web Services and workflows being executed. Our approach has been validated through numerical simulations, whose results are reported in this paper.
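To make the planner idea above concrete, here is a minimal sketch of how per-service instance counts could be derived from monitored demands using an M/M/c-style approximation. The function names, the open-queueing approximation, and all parameter values are illustrative assumptions, not the actual SAVER algorithm.

```python
import math


def erlang_c(c, a):
    """Erlang-C waiting probability for an M/M/c queue with offered load a = lambda * S."""
    summation = sum(a ** k / math.factorial(k) for k in range(c))
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    return top / (summation + top)


def mmc_response_time(lam, service_time, c):
    """Mean response time of an M/M/c station with arrival rate lam and mean service time S."""
    a = lam * service_time
    if a >= c:
        return float("inf")  # station is saturated
    wait = erlang_c(c, a) * service_time / (c - a)
    return wait + service_time


def min_instances(workflow, lam, target_rt):
    """workflow: list of (service_name, mean_service_time, visit_count).
    Greedily adds an instance at the most utilized station until the summed
    mean response times drop below target_rt."""
    if sum(s * v for _, s, v in workflow) >= target_rt:
        raise ValueError("target response time unreachable even with unlimited instances")
    counts = {name: 1 for name, _, _ in workflow}
    while True:
        total_rt = sum(v * mmc_response_time(lam * v, s, counts[name])
                       for name, s, v in workflow)
        if total_rt <= target_rt:
            return counts
        # bottleneck = station with the highest per-instance utilization
        name, _, _ = max(workflow, key=lambda t: lam * t[2] * t[1] / counts[t[0]])
        counts[name] += 1


# toy usage: three services, 5 requests/s, 2 s end-to-end response time target
wf = [("catalog", 0.12, 1.0), ("pricing", 0.30, 2.0), ("billing", 0.08, 1.0)]
print(min_instances(wf, lam=5.0, target_rt=2.0))
```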
|
Several research contributions have previously addressed the issue of optimizing resource allocation in cluster-based service centers. Recently, with the emergence of virtualization and Cloud computing, additional research on automatic resource management has been conducted. In this section we briefly review some recent results; some of them take advantage of control theory-based feedback loops @cite_7 @cite_14 , machine learning techniques @cite_19 @cite_1 , or utility-based optimization techniques @cite_13 @cite_2 . When moving to virtualized environments, the resource allocation problem becomes even more complex because of the introduction of virtual resources @cite_2 . Several approaches have been proposed for QoS and resource management at run-time @cite_6 @cite_7 @cite_20 @cite_4 @cite_8 @cite_0 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"",
"1975480691",
"1969687573",
"",
"2137771166",
"2149027035",
"2141450509",
"2010424232",
"2101438247",
"2128400269"
],
"abstract": [
"",
"",
"In this paper, we discuss several facets of optimization in cloud computing, the corresponding challenges and propose an architecture for addressing those challenges. We consider a layered cloud where various cloud layers virtualize parts of the cloud infrastructure. The architecture takes into account different stakeholders in the cloud (infrastructure providers, platform providers, application providers and end users). The architecture supports self-management by automating most of the activities pertaining to optimization: monitoring, analysis and prediction, planning and execution.",
"The adoption of virtualization and Cloud Computing technologies promises a number of benefits such as increased flexibility, better energy efficiency and lower operating costs for IT systems. However, highly variable workloads make it challenging to provide quality-of-service guarantees while at the same time ensuring efficient resource utilization. To avoid violations of service-level agreements (SLAs) or inefficient resource usage, resource allocations have to be adapted continuously during operation to reflect changes in application workloads. In this paper, we present a novel approach to self-adaptive resource allocation in virtualized environments based on online architecture-level performance models. We present a detailed case study of a representative enterprise application, the new SPECjEnterprise2010 benchmark, deployed in a virtualized cluster environment. The case study serves as a proof-of-concept demonstrating the effectiveness and practical applicability of our approach.",
"",
"This paper presents a method for achieving optimization in clouds by using performance models in the development, deployment and operations of the applications running in the cloud. We show the architecture of the cloud, the services offered by the cloud to support optimization and the methodology used by developers to enable runtime optimization of the clouds. An optimization algorithm is presented which accommodates different goals, different scopes and timescales of optimization actions, and different control algorithms. The optimization here maximizes profits in the cloud constrained by QoS and SLAs across a large variety of workloads.",
"In computing clouds, it is desirable to avoid wasting resources as a result of under-utilization and to avoid lengthy response times as a result of over-utilization. In this paper, we propose a new approach for dynamic autonomous resource management in computing clouds. The main contribution of this work is two-fold. First, we adopt a distributed architecture where resource management is decomposed into independent tasks, each of which is performed by Autonomous Node Agents that are tightly coupled with the physical machines in a data center. Second, the Autonomous Node Agents carry out configurations in parallel through Multiple Criteria Decision Analysis using the PROMETHEE method. Simulation results show that the proposed approach is promising in terms of scalability, feasibility and flexibility.",
"Getting multiple autonomic managers to work together towards a common goal is a significant architectural and algorithmic challenge, as noted in the ICAC 2006 panel discussion regarding \"Can we build effective multi-vendor autonomic systems?\" We address this challenge in a real small-scale system that processes web transactions. An administrator uses a utility function to define a set of power and performance objectives. Rather than creating a central controller to manage performance and power simultaneously, we use two existing IBM products, one that manages performance and one that manages power by controlling clock frequency. We demonstrate that, with good architectural and algorithmic choices established through trial and error, the two managers can indeed work together to act in accordance with a flexible set of power-performance objectives and tradeoffs, resulting in power savings of approximately 10 . Key elements of our approach include (a) a feedback controller that establishes a power cap (a limit on consumed power) by manipulating clock frequency and (b) reinforcement learning, which adoptively learns models of the dependence of performance and power consumption on workload intensity and the powercap.",
"Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a \"service hosting abstraction\" that allows application owners to focus on service level objectives (SLOs) for their applications. This calls for a resource management solution that achieves the SLOs for many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an automated capacity and workload management system that integrates multiple resource controllers at three different scopes and time scales. Simulation and experimental results confirm that such an integrated solution ensures efficient and effective use of data center resources while reducing service level violations for high priority applications.",
"Since many Internet applications employ a multitier architecture, in this article, we focus on the problem of analytically modeling the behavior of such applications. We present a model based on a network of queues where the queues represent different tiers of the application. Our model is sufficiently general to capture (i) the behavior of tiers with significantly different performance characteristics and (ii) application idiosyncrasies such as session-based workloads, tier replication, load imbalances across replicas, and caching at intermediate tiers. We validate our model using real multitier applications running on a Linux server cluster. Our experiments indicate that our model faithfully captures the performance of these applications for a number of workloads and configurations. Furthermore, our model successfully handles a comprehensive range of resource utilization---from 0 to near saturation for the CPU---for two separate tiers. For a variety of scenarios, including those with caching at one of the application tiers, the average response times predicted by our model were within the 95p confidence intervals of the observed average response times. Our experiments also demonstrate the utility of the model for dynamic capacity provisioning, performance prediction, bottleneck identification, and session policing. In one scenario, where the request arrival rate increased from less than 1500 to nearly 4200 requests minute, a dynamic provisioning technique employing our model was able to maintain response time targets by increasing the capacity of two of the tiers by factors of 2 and 3.5, respectively.",
"Server consolidation based on virtualization is an important technique for improving power efficiency and resource utilization in cloud infrastructures. However, to ensure satisfactory performance on shared resources under changing application workloads, dynamic management of the resource pool via online adaptation is critical. The inherent tradeoffs between power and performance as well as between the cost of an adaptation and its benefits make such management challenging. In this paper, we present Mistral, a holistic controller framework that optimizes power consumption, performance benefits, and the transient costs incurred by various adaptations and the controller itself to maximize overall utility. Mistral can handle multiple distributed applications and large-scale infrastructures through a multi-level adaptation hierarchy and scalable optimization algorithm. We show that our approach outstrips other strategies that address the tradeoff between only two of the objectives (power, performance, and transient costs)."
]
}
|
1104.5392
|
1761184600
|
The Cloud Computing paradigm is providing system architects with a new powerful tool for building scalable applications. Clouds allow allocation of resources on a "pay-as-you-go" model, so that additional resources can be requested during peak loads and released after that. However, this flexibility asks for appropriate dynamic reconfiguration strategies. In this paper we describe SAVER (qoS-Aware workflows oVER the Cloud), a QoS-aware algorithm for executing workflows involving Web Services hosted in a Cloud environment. SAVER allows execution of arbitrary workflows subject to response time constraints. SAVER uses a passive monitor to identify workload fluctuations based on the observed system response time. The information collected by the monitor is used by a planner component to identify the minimum number of instances of each Web Service which should be allocated in order to satisfy the response time constraint. SAVER uses a simple Queueing Network (QN) model to identify the optimal resource allocation. Specifically, the QN model is used to identify bottlenecks, and predict the system performance as Cloud resources are allocated or released. The parameters used to evaluate the model are those collected by the monitor, which means that SAVER does not require any particular knowledge of the Web Services and workflows being executed. Our approach has been validated through numerical simulations, whose results are reported in this paper.
|
Canfora et al. @cite_18 describe a QoS-aware service discovery and late-binding mechanism which is able to automatically adapt to changes of QoS attributes in order to meet the SLA. The authors consider the execution of workflows over a set of WS, such that each WS has multiple functionally equivalent implementations. Genetic Algorithms are used to bind each WS to one of the available implementations, so that a fitness function is maximized. The binding is done at run-time, and depends on the values of QoS attributes which are monitored by the system. It should be observed that in this work we consider a different scenario, in which each WS has just one implementation which, however, can be instantiated multiple times. Our goal is to satisfy a specific QoS requirement (mean execution time of workflows below a given threshold) with the minimum number of instances.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2003131531"
],
"abstract": [
"Run-time service discovery and late-binding constitute some of the most challenging issues of service-oriented software engineering. For late-binding to be effective in the case of composite services, a QoS-aware composition mechanism is needed. This means determining the set of services that, once composed, not only will perform the required functionality, but also will best contribute to achieve the level of QoS promised in service level agreements (SLAs). However, QoS-aware composition relies on estimated QoS values and workflow execution paths previously obtained using a monitoring mechanism. At run-time, the actual QoS values may deviate from the estimations, or the execution path may not be the one foreseen. These changes could increase the risk of breaking SLAs and obtaining a poor QoS. Such a risk could be avoided by replanning the service bindings of the workflow slice still to be executed. This paper proposes an approach to trigger and perform composite service replanning during execution. An evaluation has been performed simulating execution and replanning on a set of composite service workflows."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(ε^(-2) κ_l^2 l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± ε with high probability.
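The covariance-estimation question above can be illustrated numerically. This is only a toy experiment; the spectrum of C, the sample sizes, and the error metric are arbitrary choices, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p, l = 200, 5

# Covariance C with a fast-decaying spectrum, built from a random orthonormal basis.
true_eigs = np.array([1.0 / (k + 1) ** 2 for k in range(p)])
q, _ = np.linalg.qr(rng.standard_normal((p, p)))
root = (q * np.sqrt(true_eigs)) @ q.T  # symmetric square root of C

for n in (100, 500, 2000):
    x = rng.standard_normal((n, p)) @ root      # rows are N(0, C) samples
    sample_cov = x.T @ x / n
    est = np.sort(np.linalg.eigvalsh(sample_cov))[::-1][:l]
    rel_err = np.abs(est - true_eigs[:l]) / true_eigs[:l]
    print(f"n={n:5d}  max relative error on top {l} eigenvalues: {rel_err.max():.3f}")
```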
|
The modern asymptotic theory began in the 1950s when physicists observed that, on certain scales, the behavior of a quantum system is described by the spectrum of a random matrix @cite_49 . They further observed the phenomenon of universality: as the dimension increases, the spectral statistics become independent of the distribution of the random matrix; instead, they are determined by the symmetries of the distribution @cite_50 . Since these initial observations, physicists, statisticians, engineers, and mathematicians have found manifold applications of the asymptotic theory in high-dimensional statistics @cite_20 @cite_9 @cite_21 , physics @cite_16 @cite_49 , wireless communication @cite_4 @cite_75 , and pure mathematics @cite_72 @cite_14 , to mention only a few areas.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_72",
"@cite_50",
"@cite_49",
"@cite_16",
"@cite_75",
"@cite_20"
],
"mid": [
"1991057622",
"2072184935",
"2120350343",
"2118800758",
"2068184834",
"1526671231",
"",
"1963596092",
"2108600031",
"1520752838"
],
"abstract": [
"Comparison between formulae for the counting functions of the heights tn of the Riemann zeros and of semiclassical quantum eigenvalues En suggests that the tn are eigenvalues of an (unknown) hermitean operator H, obtained by quantizing a classical dynamical system with hamiltonian Hcl. Many features of Hcl are provided by the analogy; for example, the \"Riemann dynamics\" should be chaotic and have periodic orbits whose periods are multiples of logarithms of prime numbers. Statistics of the tn have a similar structure to those of the semiclassical En; in particular, they display random-matrix universality at short range, and nonuniversal behaviour over longer ranges. Very refined features of the statistics of the tn can be computed accurately from formulae with quantum analogues. The Riemann-Siegel formula for the zeta function is described in detail. Its interpretation as a relation between long and short periodic orbits gives further insights into the quantum spectral fluctuations. We speculate that the Riemann dynamics is related to the trajectories generated by the classical hamiltonian Hcl=XP.",
"Random matrix theory has found many applications in physics, statistics and engineering since its inception. Although early developments were motivated by practical experimental problems, random matrices are now used in fields as diverse as Riemann hypothesis, stochastic differential equations, condensed matter physics, statistical physics, chaotic systems, numerical linear algebra, neural networks, multivariate statistics, information theory, signal processing and small-world networks. This article provides a tutorial on random matrices which provides an overview of the theory and brings together in one source the most significant results recently obtained. Furthermore, the application of random matrix theory to the fundamental limits of wireless communication channels is described in depth.",
"Tables. Commonly Used Notation. 1. The Multivariate Normal and Related Distributions. 2. Jacobians, Exterior Products, Kronecker Products, and Related Topics. 3. Samples from a Multivariate Normal Distribution, and the Wishart and Multivariate BETA Distributions. 4. Some Results Concerning Decision-Theoretic Estimation of the Parameters of a Multivariate Normal Distribution. 5. Correlation Coefficients. 6. Invariant Tests and Some Applications. 7. Zonal Polynomials and Some Functions of Matrix Argument. 8. Some Standard Tests on Covariance Matrices and Mean Vectors. 9. Principal Components and Related Topics. 10. The Multivariate Linear Model. 11. Testing Independence Between k Sets of Variables and Canonical Correlation Analysis. Appendix: Some Matrix Theory. Bibliography. Index.",
"Estimating the eigenvalues of a population covariance matrix from a sample covariance matrix is a problem of fundamental importance in multivariate statistics; the eigenvalues of covariance matrices play a key role in many widely used techniques, in particular in principal component analysis (PCA). In many modem data analysis problems, statisticians are faced with large datasets where the sample size, n, is of the same order of magnitude as the number of variables p. Random matrix theory predicts that in this context, the eigenvalues of the sample covariance matrix are not good estimators of the eigenvalues of the population covariance. We propose to use a fundamental result in random matrix theory, the Marcenko-Pastur equation, to better estimate the eigenvalues of large dimensional covariance matrices. The Marcenko-Pastur equation holds in very wide generality and under weak assumptions. The estimator we obtain can be thought of as \"shrinking\" in a nonlinear fashion the eigenvalues of the sample covariance matrix to estimate the population eigenvalues. Inspired by ideas of random matrix theory, we also suggest a change of point of view when thinking about estimation of high-dimensional vectors: we do not try to estimate directly the vectors but rather a probability measure that describes them. We think this is a theoretically more fruitful way to think about these problems. Our estimator gives fast and good or very good results in extended simulations. Our algorithmic approach is based on convex optimization. We also show that the proposed estimator is consistent.",
"A reactor and the primary winding of an ignition transformer are connected in series across a source of direct current, the secondary winding of the ignition transformer is connected across an ignition plug and an interrupter is connected in parallel with the primary winding.",
"All physical systems in equilibrium obey the laws of thermodynamics. In other words, whatever the precise nature of the interaction between the atoms and molecules at the microscopic level, at the macroscopic level, physical systems exhibit universal behavior in the sense that they are all governed by the same laws and formulae of thermodynamics. In this paper we describe some recent history of universality ideas in physics starting with Wigner�s model for the scattering of neutrons off large nuclei and show how these ideas have led mathematicians to investigate universal behavior for a variety of mathematical systems. This is true not only for systems which have a physical origin, but also for systems which arise in a purely mathematical context such as the Riemann hypothesis, and a version of the card game solitaire called patience sorting.",
"",
"Abstract We review the development of random-matrix theory (RMT) during the last fifteen years. We emphasize both the theoretical aspects, and the application of the theory to a number of fields. These comprise chaotic and disordered systems, the localization problem, many-body quantum systems, the Calogero-Sutherland model, chiral symmetry breaking in QCD, and quantum gravity in two dimensions. The review is preceded by a brief historical survey of the developments of RMT and of localization theory since their inception. We emphasize the concepts common to the above-mentioned fields as well as the great diversity of RMT. In view of the universality of RMT, we suggest that the current development signals the emergence of a new “statistical mechanics”: Stochasticity and general symmetry requirements lead to universal laws not based on dynamical principles.",
"In the last few years, the asymptotic distribution of the singular values of certain random matrices has emerged as a key tool in the analysis and design of wireless communication channels. These channels are characterized by random matrices that admit various statistical descriptions depending on the actual application. The goal of this paper is the investigation and application of random matrix theory with particular emphasis on the asymptotic theorems on the distribution of the squared singular values under various assumption on the joint distribution of the random matrix entries.",
"Let x (1) denote the square of the largest singular value of an n x p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x (1) is the largest principal component variance of the covariance matrix X'X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n p = y ≥ 1. When centered by μ p = (√n-1 + √p) 2 and scaled by σ p = (√n-1 + √p)(1 √n-1 + 1 √p) 1 3 , the distribution of x (1) approaches the Tracy-Widom law of order 1, which is defined in terms of the Painleve II differential equation and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large p multivariate distribution theory may be easier to apply in practice than their fixed p counterparts."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(e^(-2)κ^2_l l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± e with high probability.
|
The fundamental object of study in asymptotic random matrix theory is the empirical spectral distribution function (ESD). Given a random Hermitian matrix @math of order @math , its ESD [ F^{A}(x) = \frac{1}{n} \# \{ 1 \le i \le n : \lambda_i(A) \le x \} ] is a random distribution function which encodes the statistics of the spectrum of @math . Wigner's theorem @cite_63 , the seminal result of the asymptotic theory, establishes that if @math is a sequence of independent, symmetric @math matrices with i.i.d. @math entries on and above the diagonal, then the expected ESD of @math converges weakly in probability, as @math approaches infinity, to the semicircular law given by [ F(x) = \frac{1}{2\pi} \int_{-\infty}^{x} \sqrt{4-y^2} \, \mathbf{1}_{[-2,2]}(y) \, dy. ] Thus, at least in the limiting sense, the spectra of these random matrices are well characterized. Development of the classical asymptotic theory has been driven by the natural question raised by Wigner's result: to what extent is the semicircular law, and more generally, the existence of a limiting spectral distribution (LSD) universal?
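To make the two displayed quantities concrete, here is a minimal numerical sketch (not part of the source text): it samples a normalized Wigner matrix, evaluates its ESD at a few points, and compares against the closed-form CDF of the semicircular law. The matrix size and the choice of Gaussian entries are illustrative assumptions.

```python
import numpy as np

def semicircle_cdf(x):
    """Closed-form CDF of the semicircular law supported on [-2, 2]."""
    x = float(np.clip(x, -2.0, 2.0))
    return 0.5 + x * np.sqrt(4.0 - x**2) / (4.0 * np.pi) + np.arcsin(x / 2.0) / np.pi

rng = np.random.default_rng(0)
n = 2000
# Wigner matrix: i.i.d. standard Gaussian entries on and above the diagonal,
# symmetrized and rescaled by 1/sqrt(n) so the spectrum concentrates on [-2, 2].
G = rng.standard_normal((n, n))
W = (np.triu(G) + np.triu(G, 1).T) / np.sqrt(n)

eigs = np.linalg.eigvalsh(W)
for x in (-1.0, 0.0, 1.0):
    esd = np.mean(eigs <= x)  # F^W(x): fraction of eigenvalues at most x
    print(f"x = {x:+.1f}: ESD = {esd:.3f}, semicircle CDF = {semicircle_cdf(x):.3f}")
```

With n = 2000 the empirical and limiting values already agree to two or three decimal places, which is the qualitative content of Wigner's theorem.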
|
{
"cite_N": [
"@cite_63"
],
"mid": [
"2043905980"
],
"abstract": [
"The statistical properties of the characteristic values of a matrix the elements of which show a normal (Gaussian) distribution are well known (cf. [6] Chapter XI) and have been derived, rather recently, in a particularly elegant fashion.1 The present problem arose from the consideration of the properties of the wave functions of quantum mechanical systems which are assumed to be so complicated that statistical considerations can be applied to them. Since the physical problem has been given rather recently in some detail in another journal [3], it will not be reviewed here. Actually, the model which underlies the present calculations shows only a limited similarity to the model which is believed to be correct. Nevertheless, the calculation which follows may have some independent interest; it certainly provided the encouragement for a detailed investigation of the model which may reproduce some features of the actual behavior of atomic nuclei."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(e^(-2)κ^2_l l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± e with high probability.
|
The literature on the existence and universality of LSDs is massive; we mention only the highlights. It is now known that the semicircular law is universal for Wigner matrices. Suppose that @math is a sequence of independent @math Wigner matrices. Grenander established that if all the moments are finite, then the ESD of @math converges weakly to the semicircular law in probability @cite_73 . Arnold showed that, assuming a finite fourth moment, the ESD almost surely converges weakly to the semicircular law @cite_40 . Around the same time, Marčenko and Pastur determined the form of the limiting spectral distribution of sample covariance matrices @cite_25 . More recently, Tao and Vu confirmed the long-conjectured circular law hypothesis. Let @math be a sequence of independent @math matrices whose entries are i.i.d. and have unit variance. Then the ESD of @math converges weakly to the uniform measure on the unit disk, both in probability and almost surely @cite_17 .
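As a supplementary illustration not drawn from the source, the circular law statement can be checked empirically with a short simulation; the Gaussian entry distribution and the matrix size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# i.i.d. mean-zero, unit-variance entries (Gaussian here), scaled by 1/sqrt(n);
# the circular law says the eigenvalues approximate the uniform law on the unit disk.
A = rng.standard_normal((n, n)) / np.sqrt(n)
radii = np.abs(np.linalg.eigvals(A))

# Under the uniform law on the unit disk, P(|lambda| <= r) = r^2.
for r in (0.25, 0.5, 0.75, 1.0):
    print(f"r = {r:.2f}: empirical {np.mean(radii <= r):.3f} vs uniform disk {r**2:.3f}")
```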
|
{
"cite_N": [
"@cite_40",
"@cite_73",
"@cite_25",
"@cite_17"
],
"mid": [
"2105077595",
"2800709898",
"2060581589",
"2063361547"
],
"abstract": [
"A process and an apparatus for the continuous ozonization of unsaturated organic compounds in the presence of water, wherein (a) the fresh charge mixture of unsaturated compounds to be ozonized is reacted in at least one first reactor in parallel flow with ozonic gas, which was previously used for ozonizing a previously partially ozonized charge mixture, and (b) simultaneously a previously partially ozonized charge mixture is reacted in at least one second reactor in parallel flow with fresh ozonic gas, and wherein, optionally, said at least two reactors contain at least two mixing sections for mixing the charged liquid and gaseous phases with each other where the hydraulic diameter of the mixing elements in the individual mixing section is reduced in the flow direction.",
"",
"In this paper we study the distribution of eigenvalues for two sets of random Hermitian matrices and one set of random unitary matrices. The statement of the problem as well as its method of investigation go back originally to the work of Dyson [i] and I. M. Lifsic [2], [3] on the energy spectra of disordered systems, although in their probability character our sets are more similar to sets studied by Wigner [4]. Since the approaches to the sets we consider are the same, we present in detail only the most typical case. The corresponding results for the other two cases are presented without proof in the last section of the paper. §1. Statement of the problem and survey of results We shall consider as acting in iV-dimensiona l unitary space v, a selfadjoint operator BN (re) of the form",
"Given an n x n complex matrix A, let mu(A)(x, y) := 1 n vertical bar 1 <= i <= n, Re lambda(i) <= x, Im lambda(i) <= y vertical bar be the empirical spectral distribution (ESD) of its eigenvalues lambda(i) is an element of C, i = l, ... , n. We consider the limiting distribution (both in probability and in the almost sure convergence sense) of the normalized ESD mu(1 root n An) of a random matrix A(n) = (a(ij))(1 <= i, j <= n), where the random variables a(ij) - E(a(ij)) are i.i.d. copies of a fixed random variable x with unit variance. We prove a universality principle for such ensembles, namely, that the limit distribution in question is independent of the actual choice of x. In particular, in order to compute this distribution, one can assume that x is real or complex Gaussian. As a related result, we show how laws for this ESD follow from laws for the singular value distribution of 1 root n A(n) - zI for complex z. As a corollary, we establish the circular law conjecture (both almost surely and in probability), which asserts that mu(1 root n An) converges to the uniform measure on the unit disc when the a(ij) have zero mean."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(e^(-2)κ^2_l l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± e with high probability.
|
As is the case in the asymptotic theory, the sharpest and most comprehensive results available in the nonasymptotic theory concern the behavior of Gaussian matrices. The amenability of the Gaussian distribution makes it possible to obtain results such as Szarek's nonasymptotic analog of the Wigner semicircle theorem for Gaussian matrices @cite_65 and Chen and Dongarra's bounds on the condition number of Gaussian matrices @cite_70 . The properties of less well-behaved random matrices can sometimes be related back to those of Gaussian matrices using probabilistic tools, such as symmetrization; see, e.g., the derivation of Latała's bound on the norms of zero-mean random matrices @cite_30 .
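The following sketch, which is not taken from the source, numerically compares the three quantities in Latała's bound against the observed spectral norm of one Gaussian matrix; plugging a single realization into the expectations is an illustrative simplification, and the matrix size is arbitrary.

```python
import numpy as np

def latala_terms(X):
    """The three quantities appearing in Latala's bound on E||X|| (up to a
    universal constant): the largest row norm, the largest column norm, and
    the fourth root of the sum of fourth powers of the entries.  For this
    one-sample illustration we plug in observed entries instead of moments."""
    row_term = np.sqrt((X**2).sum(axis=1)).max()
    col_term = np.sqrt((X**2).sum(axis=0)).max()
    fourth_term = (X**4).sum() ** 0.25
    return row_term, col_term, fourth_term

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 400))       # zero-mean independent entries
spectral_norm = np.linalg.norm(X, ord=2)  # largest singular value, about 2*sqrt(400)

r, c, f = latala_terms(X)
print(f"||X|| = {spectral_norm:.1f}; Latala terms: row {r:.1f}, column {c:.1f}, fourth {f:.1f}")
# For Gaussian entries all three terms are of order sqrt(n), consistent with
# the familiar ||X|| ~ 2*sqrt(n) behaviour up to a constant factor.
```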
|
{
"cite_N": [
"@cite_70",
"@cite_65",
"@cite_30"
],
"mid": [
"2130076109",
"2315557445",
"2031213242"
],
"abstract": [
"Let @math be an @math real random matrix whose elements are independent and identically distributed standard normal random variables, and let @math be the 2-norm condition number of @math . We prove that, for any @math , @math , and @math , @math satisfies @math where @math and @math @math are universal positive constants independent of @math , @math , and @math . Moreover, for any @math and @math , @math A similar pair of results for complex Gaussian random matrices is also established.",
"",
"We show that for any random matrix (X ij ) with independent mean zero entries E∥(X ij )∥ ≤ C(max √ΣEX 2 ij + max √ΣEX 2 ij + 4√ΣEX 4 ij ), where C is some universal constant."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(e^(-2)κ^2_l l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± e with high probability.
|
More generally, bounds on extremal eigenvalues can be obtained from knowledge of the moments of the entries. For example, the smallest singular value of a square matrix with i.i.d. zero-mean subgaussian entries with unit variance is O( @math ) with high probability @cite_74 . Concentration of measure results, such as Talagrand's concentration inequality for product spaces @cite_19 , have also contributed greatly to the nonasymptotic theory. We mention in particular the work of Achlioptas and McSherry on randomized sparsification of matrices @cite_76 @cite_77 , that of Meckes on the norms of random matrices @cite_35 , and that of Alon, Krivelevich and Vu @cite_46 on the concentration of the largest eigenvalues of random symmetric matrices, all of which are applications of Talagrand's inequality. In cases where geometric information on the distribution of the random matrices is available, the tools of empirical process theory---such as the generic chaining, also due to Talagrand @cite_78 ---can be used to convert this geometric information into information on the spectra. One natural example of such a case consists of matrices whose rows are independently drawn from a log-concave distribution @cite_31 @cite_66 .
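The claimed n^(-1/2) scaling of the smallest singular value can be observed numerically; the following sketch is not from the source, and the matrix sizes, Gaussian entries, and number of trials are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
# Empirical check of the n^(-1/2) scaling of the smallest singular value of a
# square matrix with i.i.d. zero-mean, unit-variance (here Gaussian) entries:
# sqrt(n) * s_min should stay of constant order as n grows.
for n in (100, 200, 400, 800):
    s_min = [np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)[-1]
             for _ in range(20)]
    print(f"n = {n:4d}: median of sqrt(n) * s_min = {np.sqrt(n) * np.median(s_min):.3f}")
```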
|
{
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_78",
"@cite_19",
"@cite_77",
"@cite_74",
"@cite_46",
"@cite_76",
"@cite_66"
],
"mid": [
"2074916818",
"2047772840",
"586055246",
"1973286131",
"1970950689",
"",
"2061109699",
"",
"2962867733"
],
"abstract": [
"We prove concentration results for lpn operator norms of rectangular random matrices and eigenvalues of self-adjoint random matrices. The random matrices we consider have bounded entries which are independent, up to a possible self-adjointness constraint. Our results are based on an isoperimetric inequality for product spaces due to Talagrand.",
"We present deviation inequalities of random operators of the form 1 N ∑N i=1 Xi ⊗ Xi from the average operator E(X ⊗ X), where Xi are independent random vectors distributed as X, which is a random vector in R or in 2. We use these inequalities to estimate the singular values of random matrices with independent rows (without assuming that the entries are independent).",
"Overview and Basic Facts.- Gaussian Processes and Related Structures.- Matching Theorems.- The Bernoulli Conjecture.- Families of distances.- Applications to Banach Space Theory.",
"The concentration of measure phenomenon in product spaces roughly states that, if a set A in a product ΩN of probability spaces has measure at least one half, “most” of the points of Ωn are “close” to A. We proceed to a systematic exploration of this phenomenon. The meaning of the word “most” is made rigorous by isoperimetrictype inequalities that bound the measure of the exceptional sets. The meaning of the work “close” is defined in three main ways, each of them giving rise to related, but different inequalities. The inequalities are all proved through a common scheme of proof. Remarkably, this simple approach not only yields qualitatively optimal results, but, in many cases, captures near optimal numerical constants. A large number of applications are given, in particular to Percolation, Geometric Probability, Probability in Banach Spaces, to demonstrate in concrete situations the extremely wide range of application of the abstract tools.",
"Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A p N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm.",
"",
"It is shown that for every 1≤s≤n, the probability that thes-th largest eigenvalue of a random symmetricn-by-n matrix with independent random entries of absolute value at most 1 deviates from its median by more thant is at most 4e − t 232 s2. The main ingredient in the proof is Talagrand’s Inequality for concentration of measure in product spaces.",
"",
"Abstract Let X 1 , … , X N ∈ R n be independent centered random vectors with log-concave distribution and with the identity as covariance matrix. We show that with overwhelming probability one has sup x ∈ S n − 1 | 1 N ∑ i = 1 N ( | 〈 X i , x 〉 | 2 − E | 〈 X i , x 〉 | 2 ) | ⩽ C n N , where C is an absolute positive constant. This result is valid in a more general framework when the linear forms ( 〈 X i , x 〉 ) i ⩽ N , x ∈ S n − 1 and the Euclidean norms ( | X i | n ) i ⩽ N exhibit uniformly a sub-exponential decay. As a consequence, if A denotes the random matrix with columns ( X i ) , then with overwhelming probability, the extremal singular values λ min and λ max of A A ⊤ satisfy the inequalities 1 − C n N ⩽ λ min N ⩽ λ max N ⩽ 1 + C n N which is a quantitative version of Bai–Yin theorem (Z.D. Bai, Y.Q. Yin, 1993 [4] ) known for random matrices with i.i.d. entries."
]
}
|
1104.4513
|
1595554894
|
This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that if the lowest eigenvalues decay sufficiently fast, Ω(e^(-2)κ^2_l l log p) samples, where κ_l = λ_1(C)/λ_l(C), are sufficient to ensure that the dominant l eigenvalues of the covariance matrix of a N(0,C) random vector are estimated to within a factor of 1 ± e with high probability.
|
Major contributions to the literature on matrix probability inequalities include the papers @cite_32 @cite_10 @cite_39 . We emphasize two works of Oliveira @cite_34 @cite_58 that go well beyond earlier research. The sharpest current results appear in the works of Tropp @cite_37 @cite_13 @cite_57 . Recently, Hsu, Kakade, and Zhang @cite_54 have modified Tropp's approach to establish matrix probability inequalities that depend on an intrinsic dimension parameter, rather than the ambient dimension.
|
{
"cite_N": [
"@cite_13",
"@cite_37",
"@cite_54",
"@cite_32",
"@cite_34",
"@cite_39",
"@cite_57",
"@cite_58",
"@cite_10"
],
"mid": [
"",
"2107411554",
"1513076946",
"2030626224",
"1789701990",
"2166163936",
"2137798267",
"",
"2120872934"
],
"abstract": [
"",
"This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable.",
"We derive exponential tail inequalities for sums of random matrices with no dependence on the explicit matrix dimensions. These are similar to the matrix versions of the Chernoff bound and Bernstein inequality except with the explicit matrix dimensions replaced by a trace quantity that can be small even when the dimension is large or infinite. Some applications to principal component analysis and approximate matrix multiplication are given to illustrate the utility of the new bounds.",
"The AlonRoichman theorem states that for every e> 0 there is a constant c(e), such that the Cayley graph of a finite group G with respect to c(e)log |G| elements of G, chosen independently and uniformly at random, has expected second largest eigenvalue less than e. In particular, such a graph is an expander with high probability. Landau and Russell, and independently Loh and Schulman, improved the bounds of the theorem. Following Landau and Russell we give a new proof of the result, improving the bounds even further. When considered for a general group G, our bounds are in a sense best possible. We also give a generalization of the AlonRoichman theorem to random coset graphs. Our proof uses a Hoeffding-type result for operator valued random variables, which we believe can be of independent interest. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008",
"Consider any random graph model where potential edges appear independently, with possibly different probabilities, and assume that the minimum expected degree is !(lnn). We prove that the adjacency matrix and the Laplacian of that random graph are concentrated around the corresponding matrices of the weighted graph whose edge weights are the probabilities in the random model. We apply this result to two different settings. In bond percolation, we show that, whenever the minimum expected degree in the random model is not too small, the Laplacian of the percolated graph is typically close to that of the original graph. As a corollary, we improve upon a bound for the spectral gap of the percolated graph due to Chung and Horn.",
"We present novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler and more general than previous approaches. It is shown that an unknown matrix of rank can be efficiently reconstructed from only randomly sampled expansion coefficients with respect to any given matrix basis. The number quantifies the “degree of incoherence” between the unknown matrix and the basis. Existing work concentrated mostly on the problem of “matrix completion” where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates are slightly tighter. We discuss operator bases which are incoherent to all low-rank matrices simultaneously. For these bases, we show that randomly sampled expansion coefficients suffice to recover any low-rank matrix with high probability. The latter bound is tight up to multiplicative constants.",
"Freedman's inequality is a martingale counterpart to Bernstein's inequality. This result shows that the large-deviation behavior of a martingale is controlled by the predictable quadratic variation and a uniform upper bound for the martingale difference sequence. Oliveira has recently established a natural extension of Freedman's inequality that provides tail bounds for the maximum singular value of a matrix-valued martingale. This note describes a different proof of the matrix Freedman inequality that depends on a deep theorem of Lieb from matrix analysis. This argument delivers sharp constants in the matrix Freedman inequality, and it also yields tail bounds for other types of matrix martingales. The new techniques are adapted from recent work by the present author.",
"",
"This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candes and Recht (2009), Candes and Tao (2009), and (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory."
]
}
|
1104.3913
|
2949135123
|
We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
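As a supplementary illustration that is not part of the source, the constraint that similar individuals be treated similarly can be written, in the special case of binary outcomes, as a small linear program: each individual gets an acceptance probability, expected utility is maximized, and the probabilities of any two individuals may differ by at most their distance under the task-specific metric. All numbers below are invented, and the formulation is only a simplified sketch of the framework described above.

```python
import numpy as np
from scipy.optimize import linprog

# Invented example: four individuals, the vendor's utility for accepting each,
# and a task-specific similarity metric d (smaller means more similar).
utility = np.array([1.0, 0.9, -0.8, -1.0])
d = np.array([
    [0.0, 0.1, 0.8, 0.9],
    [0.1, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.1],
    [0.9, 0.8, 0.1, 0.0],
])
n = len(utility)

# Variables: acceptance probabilities p_i in [0, 1].
# Lipschitz-style fairness constraint: p_i - p_j <= d(i, j) for every ordered pair.
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0
            rows.append(row)
            rhs.append(d[i, j])

# Maximize utility . p, i.e. minimize -utility . p
res = linprog(c=-utility, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=[(0.0, 1.0)] * n)
print("acceptance probabilities:", np.round(res.x, 3))
```

In the solution, individuals 1 and 2 (which are close under d) receive nearly identical probabilities, and the solver trades some utility away to keep dissimilar pairs within their allowed gap.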
|
Concerns for "fairness" also arise in many contexts in computer science, game theory, and economics. For example, in the distributed computing literature, one meaning of fairness is that a process that attempts infinitely often to make progress eventually makes progress. One quantitative meaning of unfairness in scheduling theory is the maximum, taken over all members of a set of long-lived processes, of the difference between the actual load on the process and the so-called desired load (the desired load is a function of the tasks in which the process participates) @cite_15 ; other notions of fairness appear in @cite_8 @cite_1 @cite_12 , to name a few. For an example of work incorporating fairness into game theory and economics see the eponymous paper @cite_3 .
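To pin down the quantitative scheduling notion cited above, here is a tiny sketch (not from the source; the names and loads are invented) of the unfairness measure: the maximum over all processes of actual load minus desired load.

```python
def unfairness(actual_load, desired_load):
    """Maximum, over all long-lived processes, of the difference between the
    actual load placed on the process and its desired load."""
    return max(actual_load[p] - desired_load[p] for p in desired_load)

# Toy carpool-style instance: desired load reflects how often each person
# participated; actual load counts how often each person actually drove.
desired = {"alice": 2.5, "bob": 1.5, "carol": 1.0}
actual = {"alice": 3.0, "bob": 1.0, "carol": 1.0}
print(unfairness(actual, desired))  # 0.5: alice drove half a day more than her share
```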
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_15",
"@cite_12"
],
"mid": [
"1978593916",
"1992678640",
"1599656298",
"1964238630",
"2127508398"
],
"abstract": [
"We consider the following problem: The Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let pij be the value that kid i has for present j. The Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e he tries to maximize mini=1,...,m sumj ∈ Si pij where Si is a set of presents received by the i-th kid.Our main result is an O(log log m log log log m) approximation algorithm for the restricted assignment case of the problem when pij ∈ pj,0 (i.e. when present j has either value pj or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of Ω(m1 2) in the general case, when pij can be arbitrary.",
"We consider a problem known as the restricted assignment version of the max-min allocation problem with indivisible goods. There are n items of various nonnegative values and m players. Every player is interested only in some of the items and has zero value for the other items. One has to distribute the items among the players in a way that maximizes a certain notion of fairness, namely, maximizes the minimum of the sum of values of items given to any player. Bansal and Sviridenko [STOC 2006] describe a linear programming relaxation for this problem, and present a rounding technique that recovers an allocation of value at least Ω(log log log m log log m) of the optimum. We show that the value of this LP relaxation in fact approximates the optimum value to within a constant factor. Our proof is not constructive and does not by itself provide an efficient algorithm for finding an allocation that is within constant factors of optimal.",
"People like to help those who are helping them and to hurt those who are hurting them. Outcomes rejecting such motivations are called fairness equilibria. Outcomes are mutual-max when each person maximizes the other's material payoffs, and mutual-min when each person minimizes the other's payoffs. It is shown that every mutual-max or mutual-min Nash equilibrium is a fairness equilibrium. If payoffs are small, fairness equilibria are roughly the set of mutual-max and mutual-min outcomes; if payoffs are large, fairness equilibria are roughly the set of Nash equilibria. Several economic examples are considered and possible welfare implications of fairness are explored. Copyright 1993 by American Economic Association.",
"On-line machine scheduling has been studied extensively, but the fundamental issue of fairness in scheduling is still mostly open. In this paper we explore the issue in settings where there are long-lived processes which should be repeatedly scheduled for various tasks throughout the lifetime of a system. For any such instance we develop a notion ofdesiredload of a process, which is a function of the tasks it participates in. Theunfairnessof a system is the maximum, taken over all processes, of the difference between the desired load and the actual load.An example of such a setting is thecarpool problemsuggested by Fagin and Williams IBM Journal of Research and Development27(2) (1983), 133?139]. In this problem, a set ofnpeople form a carpool. On each day a subset of the people arrive and one of them is designated as the driver. A scheduling rule is required so that the driver will be determined in a “fair” way.We investigate this problem under various assumptions on the input distribution. We also show that the carpool problems can capture several other problems of fairness in scheduling.",
"We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty we show that even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are Ω(1)-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up Ω(1)-fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties."
]
}
|
1104.4601
|
1823091629
|
We report on the Gaussian file search system designed as part of the ChemXSeer digital library. Gaussian files are produced by the Gaussian software [4], a software package used for calculating molecular electronic structure and properties. The output files are semi-structured, allowing relatively easy access to the Gaussian attributes and metadata. Our system is currently capable of searching Gaussian documents using a boolean combination of atoms (chemical elements) and attributes. We have also implemented a faceted browsing feature on three important Gaussian attribute types - Basis Set, Job Type and Method Used. The faceted browsing feature enables a user to view and process a smaller, filtered subset of documents.
|
In this section we give a brief sketch of the related work. The importance of using large databases to support chemistry calculations has been illustrated by Feller in @cite_8 . Schuchardt et al. describe such a database, the Basis Set Exchange @cite_2 . Basis Set Exchange helps users find particular basis sets that work on certain collections of atoms, while ChemXSeer lets users search Gaussian files with basis sets as boolean query components.
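A boolean atom-and-attribute search of the kind mentioned above might look roughly like the following sketch; the record layout, field names, and example values are hypothetical and are not taken from the actual ChemXSeer implementation.

```python
# Hypothetical, simplified records of parsed Gaussian output metadata;
# the real ChemXSeer index schema is not described here.
documents = [
    {"id": "doc1", "atoms": {"C", "H", "O"}, "basis_set": "6-31G*", "method": "B3LYP", "job_type": "Opt"},
    {"id": "doc2", "atoms": {"C", "H", "N"}, "basis_set": "cc-pVDZ", "method": "MP2", "job_type": "Freq"},
    {"id": "doc3", "atoms": {"Fe", "C", "O"}, "basis_set": "6-31G*", "method": "B3LYP", "job_type": "Opt"},
]

def search(docs, required_atoms=(), **attributes):
    """Boolean AND query: keep documents containing every requested atom
    and matching every requested attribute value."""
    hits = []
    for doc in docs:
        if not set(required_atoms) <= doc["atoms"]:
            continue
        if all(doc.get(key) == value for key, value in attributes.items()):
            hits.append(doc["id"])
    return hits

print(search(documents, required_atoms=["C", "O"], basis_set="6-31G*", method="B3LYP"))
# ['doc1', 'doc3']
```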
|
{
"cite_N": [
"@cite_2",
"@cite_8"
],
"mid": [
"2151220474",
"1996846370"
],
"abstract": [
"Basis sets are some of the most important input data for computational models in the chemistry, materials, biology, and other science domains that utilize computational quantum mechanics methods. Providing a shared, Web-accessible environment where researchers can not only download basis sets in their required format but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a Web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.",
"A role for electronic structure databases in assisting users of quantum chemistry applications select better model parameters is discussed in light of experiences gained from a software prototype known as the Computational Chemistry Input Assistant (CCIA). It is argued that the ready availability of information pertaining to the applications and theoretical models can substantially increase the likelihood of novice users obtaining the desired accuracy from their calculations while simultaneously making better use of computer resources. Expert users, who find themselves contemplating studies in new areas of research, may also benefit from the proposed tools. For maximum impact, this assistance should be provided while users are actively engaged in preparing calculations. © 1996 by John Wiley & Sons, Inc."
]
}
|
1104.4601
|
1823091629
|
We report on the Gaussian file search system designed as part of the ChemXSeer digital library. Gaussian files are produced by the Gaussian software [4], a software package used for calculating molecular electronic structure and properties. The output files are semi-structured, allowing relatively easy access to the Gaussian attributes and metadata. Our system is currently capable of searching Gaussian documents using a boolean combination of atoms (chemical elements) and attributes. We have also implemented a faceted browsing feature on three important Gaussian attribute types - Basis Set, Job Type and Method Used. The faceted browsing feature enables a user to view and process a smaller, filtered subset of documents.
|
Among other purely chemistry-domain digital libraries, OREChem ChemXSeer by Li et al. @cite_4 integrates semantic web technology with the basic ChemXSeer framework. The Chemical Education Digital Library @cite_5 and the JCE (Journal of Chemical Education) Digital Library @cite_6 focus on organizing instructional and educational materials in Chemistry. Both these projects are supported by NSF under the National Science Digital Library (NSDL). In contrast with these studies, our focus here is to design a search functionality on Gaussian files that helps domain experts locate attribute information more easily.
|
{
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_6"
],
"mid": [
"2013948691",
"1966541562",
""
],
"abstract": [
"The ChemEd Digital Library is one of eleven Pathways projects in the National Science Digital Library, a major project supported by the NSF to improve science education nationwide, that aims to provide broad access to a rich, reliable, and authoritative collection of materials.",
"Representing the semantics of unstructured scientific publications will certainly facilitate access and search and hopefully lead to new discoveries. However, current digital libraries are usually limited to classic flat structured metadata even for scientific publications that potentially contain rich semantic metadata. In addition, how to search the scientific literature of linked semantic metadata is an open problem. We have developed a semantic digital library oreChem ChemxSeer that models chemistry papers with semantic metadata. It stores and indexes extracted metadata from a chemistry paper repository Chemx Seer using \"compound objects\". We use the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) (http: www.openarchives.org ore standard to define a compound object that aggregates metadata fields related to a digital object. Aggregated metadata can be managed and retrieved easily as one unit resulting in improved ease-of-use and has the potential to improve the semantic interpretation of shared data. We show how metadata can be extracted from documents and aggregated using OAI-ORE. ORE objects are created on demand; thus, we are able to search for a set of linked metadata with one query. We were also able to model new types of metadata easily. For example, chemists are especially interested in finding information related to experiments in documents. We show how paragraphs containing experiment information in chemistry papers can be extracted and tagged based on a chemistry ontology with 470 classes, and then represented in ORE along with other document-related metadata. Our algorithm uses a classifier with features that are words that are typically only used to describe experiments, such as \"apparatus\", \"prepare\", etc. Using a dataset comprised of documents from the Royal Society of Chemistry digital library, we show that the our proposed methodperforms well in extracting experiment-related paragraphs from chemistry documents.",
""
]
}
|
1104.2944
|
2949310786
|
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most @math rounds in a network of diameter @math , with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of @math , which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires @math rounds in the LOCAL model can be simulated in @math rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.
|
The problem of spreading information in a distributed system was introduced by @cite_25 for the purpose of replicated database maintenance, and it has been extensively studied thereafter.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2038562061"
],
"abstract": [
"Whru a dilt lhSC is replicated at, many sites2 maintaining mutual consistrnry among t,he sites iu the fac:e of updat,es is a signitirant problem. This paper descrikrs several randomized algorit,hms for dist,rihut.ing updates and driving t,he replicas toward consist,c>nc,y. The algorit Inns are very simple and require few guarant,ees from the underlying conllllunicat.ioll system, yc+ they rnsutc t.hat. the off( c t, of (‘very update is evcnt,uwlly rf+irt-ted in a11 rq1ica.s. The cost, and parformancc of t,hr algorithms arc tuned I>? c oosing appropriat,c dist,rilMions in t,hc randoinizat,ioii step. TIN> idgoritlmls ilr(’ c*los *ly analogoIls t,o epidemics, and t,he epidcWliolog)litc , ilitlh iii Illld rsti4lldill tlicir bc*liavior. One of tlW i$,oritlims 11&S brc>n implrmcWrd in the Clraringhousr sprv(brs of thr Xerox C’orporat c Iiitcrnc4, solviiig long-standing prol>lf lns of high traffic and tlatirl>ilsr inconsistcllcp."
]
}
|
1104.2944
|
2949310786
|
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most @math rounds in a network of diameter @math , with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of @math , which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires @math rounds in the LOCAL model can be simulated in @math rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.
|
One fundamental property of the distributed system that affects the number of rounds required for information spreading is the communication model. The random phone-call model was introduced by @cite_1 , allowing every node to contact one other node in each round. In our setting, this corresponds to the complete graph. This model alone received much attention, such as in bounding the number of calls @cite_7 , bounding the number of random bits used @cite_8 , bounding the number of bits @cite_11 , and more.
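The following minimal simulation of push-style rumor spreading on the complete graph is not taken from any of the cited papers; it only illustrates that a single rumor reaches all n nodes in roughly log2(n) + ln(n) rounds. For simplicity a caller may occasionally dial itself, which slows spreading only marginally.

```python
import math
import random

def push_rounds(n, seed=0):
    """Rounds of push rumor spreading on the complete graph: in every round,
    each informed node calls a uniformly random node and forwards the rumor."""
    rng = random.Random(seed)
    informed = {0}          # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

n = 10_000
trials = [push_rounds(n, seed=s) for s in range(10)]
print(f"n = {n}: rounds over 10 trials = {trials}, "
      f"log2(n) + ln(n) ~ {math.log2(n) + math.log(n):.1f}")
```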
|
{
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_7",
"@cite_8"
],
"mid": [
"2068749247",
"2157004711",
"1865724022",
"2009267577"
],
"abstract": [
"We study the communication complexity of rumor spreading in the random phone-call model. Suppose nplayers communicate in parallel rounds, where in each round every player calls a randomly selected communication partner. A player u is allowed to exchange messages during a round only with the player that u called, and with all the players that @math received calls from, in that round. In every round, a (possibly empty) set of rumors to be distributed among all players is generated, and each of the rumors is initially placed in a subset of the players. Karp et. al Karp2000 showed that no rumor-spreading algorithm that spreads a rumor to all players with constant probability can be both time-optimal, taking O(lg n) rounds, and message-optimal, using O(n) messages per rumor. For address-oblivious algorithms, in particular, they showed that Ω(n lg lg n) messages per rumor are required, and they described an algorithm that matches this bound and takes O(lg n) rounds. We investigate the number of communication bits required for rumor spreading. On the lower-bound side, we establish that any address-oblivious algorithm taking O(lg n) rounds requires Ω(n (b+ lg lg n)) communication bits to distribute a rumor of size b bits. On the upper-bound side, we propose an address-oblivious algorithm that takes O(lg n) rounds and uses O(n(b+ lg lg n lg b)) bits. These results show that, unlike the case for the message complexity, optimality in terms of both the running time and the bit communication complexity is attainable, except for very small rumor sizes b",
"Investigates the class of epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose n players communicate in parallel rounds in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor form the calling to the called players for spl Theta (ln n) rounds needs to transmit the rumor spl Theta (n ln n) times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly. We give an algorithm using only O(n ln ln n) transmissions and O(ln n) rounds. In addition, we prove the robustness of this algorithm. On the negative side, we show that any address-oblivious algorithm needs to send spl Omega (n ln ln n) messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time and communication optimality cannot be achieved simultaneously using random phone calls, i.e. every algorithm that distributes a rumor in O(ln n) rounds needs spl omega (n) transmissions.",
"We propose a new protocol for the fundamental problem of disseminating a piece of information to all members of a group of n players. It builds upon the classical randomized rumor spreading protocol and several extensions. The main achievements are the following: Our protocol spreads a rumor from one node to all other nodes in the asymptotically optimal time of (1 + o(1)) log2 n. The whole process can be implemented in a way such that only O(nf(n)) calls are made, where f(n) = ω(1) can be arbitrary. In spite of these quantities being close to the theoretical optima, the protocol remains relatively robust against failures; for random node failures, our algorithm again comes arbitrarily close to the theoretical optima. The protocol can be extended to also deal with adversarial node failures. The price for that is only a constant factor increase in the run-time, where the constant factor depends on the fraction of failing nodes the protocol is supposed to cope with. It can easily be implemented such that only O(n) calls to properly working nodes are made. In contrast to the push-pull protocol by [FOCS 2000], our algorithm only uses push operations, i.e., only informed nodes take active actions in the network. On the other hand, we discard address-obliviousness. To the best of our knowledge, this is the first randomized push algorithm that achieves an asymptotically optimal running time.",
"We investigate the randomness requirements of the classical rumor spreading problem on fully connected graphs with n vertices. In the standard random protocol, where each node that knows the rumor sends it to a randomly chosen neighbor in every round, each node needs O((log n)2) random bits in order to spread the rumor in O(log n) rounds with high probability (w.h.p.). For the simple quasirandom rumor spreading protocol proposed by Doerr, Friedrich, and Sauerwald (2008), [log n] random bits per node are sufficient. A lower bound by Doerr and Fouz (2009) shows that this is asymptotically tight for a slightly more general class of protocols, the so-called gate-model. In this paper, we consider general rumor spreading protocols. We provide a simple push-protocol that requires only a total of O(n log log n) random bits (i.e., on average O(log log n) bits per node) in order to spread the rumor in O(log n) rounds w.h.p. We also investigate the theoretical minimal randomness requirements of efficient rumor spreading. We prove the existence of a (non-uniform) push-protocol for which a total of 2 log n + log log n + o(log log n) random bits suffice to spread the rumor in log n + ln n + O(1) rounds with probability 1 − o(1). This is contrasted by a simple time-randomness tradeoff for the class of all rumor spreading protocols, according to which any protocol that uses log n − log log n − ω(1) random bits requires ω(log n) rounds to spread the rumor."
]
}
|
1104.2944
|
2949310786
|
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most @math rounds in a network of diameter @math , with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of @math , which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires @math rounds in the LOCAL model can be simulated in @math rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.
|
Apart from the uniform randomized algorithm, additional algorithms were suggested for spreading information. We briefly overview some of these approaches. @cite_16 introduce quasi-random rumor spreading, in which a node chooses its next communication partner by deterministically going over its list of neighbors, but the starting point of the list is chosen at random. Results are @math rounds for a complete graph and the hypercube, as well as improved complexities for other families of graphs compared to the randomized rumor spreading algorithm with uniform distribution over neighbors. This was followed by further analysis of the quasi-random algorithm @cite_17 @cite_19 . A hybrid algorithm, alternating between deterministic and randomized choices @cite_28 , was shown to achieve information spreading in @math rounds, w.h.p., where @math is the weak conductance of the graph, a measure of connectivity of subsets in the graph. Distance-based bounds were given for nodes placed with uniform density in @math @cite_20 @cite_13 , which also address gossip-based solutions to specific problems such as resource location and minimum spanning tree.
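A simplified simulation of the quasi-random push protocol described above is sketched below; it is not the cited authors' code, and the hypercube instance, tie handling, and single informed source are illustrative assumptions.

```python
import math
import random

def quasirandom_push_rounds(adjacency, seed=0):
    """Quasi-random push protocol: each node has a fixed cyclic list of its
    neighbors and a uniformly random starting position; once informed, it
    contacts successive neighbors from that list, one per round."""
    rng = random.Random(seed)
    n = len(adjacency)
    start = [rng.randrange(len(adjacency[v])) for v in range(n)]
    informed = {0}
    pushes = {0: 0}           # how many pushes each informed node has made so far
    rounds = 0
    while len(informed) < n:
        contacted = set()
        for v in list(informed):
            nbrs = adjacency[v]
            contacted.add(nbrs[(start[v] + pushes[v]) % len(nbrs)])
            pushes[v] += 1
        for u in contacted - informed:
            informed.add(u)
            pushes[u] = 0     # newly informed nodes start pushing next round
        rounds += 1
    return rounds

# Hypercube on 2^d vertices: neighbors differ in exactly one bit.
d = 10
n = 2 ** d
adjacency = [[v ^ (1 << b) for b in range(d)] for v in range(n)]
print(f"hypercube with 2^{d} nodes: {quasirandom_push_rounds(adjacency)} rounds, "
      f"log2(n) = {int(math.log2(n))}")
```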
|
{
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1991545795",
"2078958987",
"",
"2163636580",
"1990476939",
"1771950282"
],
"abstract": [
"Gathering data from nodes in a network is at the heart of many distributed applications, most notably, while performing a global task. We consider information spreading among n nodes of a network, where each node v has a message m(v) which must be received by all other nodes. The time required for information spreading has been previously upper-bounded with an inverse relationship to the conductance of the underlying communication graph. This implies high running times for graphs with small conductance. The main contribution of this paper is an information spreading algorithm which overcomes communication bottlenecks and thus achieves fast information spreading for a wide class of graphs, despite their small conductance. As a key tool in our study we use the recently defined concept of weak conductance, a generalization of classic graph conductance which measures how well-connected the components of a graph are. Our hybrid algorithm, which alternates between random and deterministic communication phases, exploits the connectivity within components by first applying partial information spreading, after which messages are sent across bottlenecks, thus spreading further throughout the network. This yields substantial improvements over the best known running times of algorithms for information spreading on any graph that has a large weak conductance, from polynomial to polylogarithmic number of rounds. We demonstrate the power of fast information spreading in accomplishing global tasks on the leader election problem, which lies at the core of distributed computing. Our results yield an algorithm for leader election that has a scalable running time on graphs with large weak conductance, improving significantly upon previous results.",
"In this paper, we provide a detailed comparison between a fully randomized protocol for rumor spreading on a complete graph and a quasirandom protocol introduced by Doerr, Friedrich, and Sauerwald [Quasirandom rumor spreading, in Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, SIAM, Philadelphia, 2008, pp. 773-781]. In the former, initially there is one vertex which holds a piece of information, and during each round every one of the informed vertices chooses uniformly at random and independently one of its neighbors and informs it. In the quasirandom version of this method (cf. Doerr, Friedrich, and Sauerwald) each vertex has a cyclic list of its neighbors. Once a vertex has been informed, it chooses uniformly at random only one neighbor. In the following round, it informs this neighbor, and at each subsequent round it picks the next neighbor from its list and informs it. We give a precise analysis of the evolution of the quasirandom protocol on the complete graph with @math vertices and show that it evolves essentially in the same way as the randomized protocol. In particular, if @math denotes the number of rounds that are needed until all vertices are informed, we show that for any slowly growing function @math , we have @math , with probability @math .",
"",
"In recent years, gossip-based algorithms have gained prominence as a methodology for designing robust and scalable communication schemes in large distributed systems. The premise underlying distributed gossip is very simple: in each time step, each node v in the system selects some other node w as a communication partner, generally by a simple randomized rule, and exchanges information with w; over a period of time, information spreads through the system in an \"epidemic fashion\". A fundamental issue which is not well understood is the following: how does the underlying low-level gossip mechanism (the means by which communication partners are chosen) affect one's ability to design efficient high-level gossip-based protocols? We establish one of the first concrete results addressing this question, by showing a fundamental limitation on the power of the commonly used uniform gossip mechanism for solving nearest-resource location problems. In contrast, very efficient protocols for this problem can be designed using a non-uniform spatial gossip mechanism, as established in earlier work with Alan Demers. We go on to consider the design of protocols for more complex problems, providing an efficient distributed gossip-based protocol for a set of nodes in Euclidean space to construct an approximate minimum spanning tree. Here too, we establish a contrasting limitation on the power of uniform gossip for solving this problem. Finally, we investigate gossip-based packet routing as a primitive that underpins the communication patterns in many protocols, and as a way to understand the capabilities of different gossip mechanisms at a general level.",
"The dynamic behavior of a network in which information is changing continuously over time requires robust and efficient mechanisms for keeping nodes updated about new information. Gossip protocols are mechanisms for this task in which nodes communicate with one another according to some underlying deterministic or randomized algorithm, exchanging information in each communication step. In a variety of contexts, the use of randomization to propagate information has been found to provide better reliability and scalability than more regimented deterministic approaches.In many settings, such as a cluster of distributed computing hosts, new information is generated at individual nodes, and is most \"interesting\" to nodes that are nearby. Thus, we propose distance-based propagation bounds as a performance measure for gossip mechanisms: a node at distance d from the origin of a new piece of information should be able to learn about this information with a delay that grows slowly with d, and is independent of the size of the network.For nodes arranged with uniform density in Euclidean space, we present natural gossip mechanisms, called spatial gossip, that satisfy such a guarantee: new information is spread to nodes at distance d, with high probability, in O(log1 + e d) time steps. Such a bound combines the desirable qualitative features of uniform gossip, in which information is spread with a delay that is logarithmic in the full network size, and deterministic flooding, in which information is spread with a delay that is linear in the distance and independent of the network size. Our mechanisms and their analysis resolve a conjecture of [1987].We further show an application of our gossip mechanisms to a basic resource location problem, in which nodes seek to rapidly learn the location of the nearest copy of a resource in a network. This problem, which is of considerable practical importance, can be solved by a very simple protocol using Spatial Gossip, whereas we can show that no protocol built on top of uniform gossip can inform nodes of their approximately nearest resource within poly-logarithmic time. The analysis relies on an additional useful property of spatial gossip, namely that information travels from its source to sinks along short paths not visiting points of the network far from the two nodes.",
"Randomized rumor spreading is an efficient protocol to distribute information in networks. Recently, a quasirandom version has been proposed and proven to work equally well on many graphs and better for sparse random graphs. In this work we show three main results for the quasirandom rumor spreading model. We exhibit a natural expansion property for networks which suffices to make quasirandom rumor spreading inform all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Renyi random graphs. For all network topologies, we show that if one of the push or pull model works well, so does the other. We also show that quasirandom rumor spreading is robust against transmission failures. If each message sent out gets lost with probability f , then the runtime increases only by a factor of @math ."
]
}
|