aid (string, lengths 9-15) | mid (string, lengths 7-10) | abstract (string, lengths 78-2.56k) | related_work (string, lengths 92-1.77k) | ref_abstract (dict) |
---|---|---|---|---|
1005.2603
|
1802422466
|
This paper presents a concise tutorial on spectral clustering for a broad spectrum of graphs, including unipartite (undirected), bipartite, and directed graphs. We show how to transform bipartite and directed graphs into corresponding unipartite graphs, therefore allowing a unified treatment of all cases. For bipartite graphs, we show that the relaxed solution to the @math -way co-clustering problem can be found by computing the left and right eigenvectors of the data matrix. This gives a theoretical basis for @math -way spectral co-clustering algorithms proposed in the literature. We also show that solving row and column co-clustering is equivalent to solving row and column clustering separately, thus giving theoretical support for the claim: ``column clustering implies row clustering and vice versa''. In the last part, we generalize the Ky Fan theorem---which is the central theorem for explaining spectral clustering---to rectangular complex matrices, motivated by the results from bipartite graph analysis.
|
@cite_16 discuss how to extend the so-called modularity---which is equivalent to the graph cuts objective---from unipartite graphs to directed graphs. They form an asymmetric modularity matrix @math by applying the modularity function to the original asymmetric affinity matrix @math in a way that emphasizes the importance of the edge directions, and then transform @math into a symmetric matrix by adding @math to its transpose. The clustering is done by computing the first @math eigenvectors of this symmetric matrix. This is equivalent to applying to @math .
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2063251739"
],
"abstract": [
"We consider the problem of finding communities or modules in directed networks. In the past, the most common approach to this problem has been to ignore edge direction and apply methods developed for community discovery in undirected networks, but this approach discards potentially useful information contained in the edge directions. Here we show how the widely used community finding technique of modularity maximization can be generalized in a principled fashion to incorporate information contained in edge directions. We describe an explicit algorithm based on spectral optimization of the modularity and show that it gives demonstrably better results than previous methods on a variety of test networks, both real and computer generated."
]
}
|
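As a concrete illustration of the symmetrization recipe in the related-work note above (build an asymmetric modularity matrix from the affinity matrix, add it to its transpose, then cluster with its leading eigenvectors), here is a minimal NumPy sketch. The degree convention for the modularity matrix and the sign-based two-way split are common choices, not necessarily the cited authors' exact formulation; for a @math -way clustering one would instead take the top @math eigenvectors and run k-means on them.

```python
import numpy as np

def directed_modularity_split(A):
    """Two-way split of a directed graph via the symmetrized modularity matrix.

    Sketch of the recipe above: build an asymmetric modularity matrix B from
    the affinity matrix A, symmetrize it as B + B^T, and read a bisection off
    the sign pattern of the leading eigenvector.  Conventions for B (in/out
    degree order, normalization) vary across the literature; this is one
    common choice, not the cited authors' exact code.
    """
    A = np.asarray(A, dtype=float)
    m = A.sum()                         # total edge weight
    k_out = A.sum(axis=1)               # out-degrees (row sums)
    k_in = A.sum(axis=0)                # in-degrees  (column sums)
    B = A - np.outer(k_out, k_in) / m   # asymmetric modularity matrix
    S = B + B.T                         # symmetrization step
    eigvals, eigvecs = np.linalg.eigh(S)
    leading = eigvecs[:, np.argmax(eigvals)]
    return (leading > 0).astype(int)    # community labels 0/1

if __name__ == "__main__":
    # toy directed graph: two groups of 4 nodes, dense inside, one edge across
    A = np.zeros((8, 8))
    for i in range(4):
        for j in range(4):
            if i != j:
                A[i, j] = 1.0           # group 1 internal edges
                A[i + 4, j + 4] = 1.0   # group 2 internal edges
    A[0, 5] = 1.0                        # single cross-group edge
    print(directed_modularity_split(A))  # e.g. [0 0 0 0 1 1 1 1]
```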
1005.2603
|
1802422466
|
This paper presents a concise tutorial on spectral clustering for a broad spectrum of graphs, including unipartite (undirected), bipartite, and directed graphs. We show how to transform bipartite and directed graphs into corresponding unipartite graphs, therefore allowing a unified treatment of all cases. For bipartite graphs, we show that the relaxed solution to the @math -way co-clustering problem can be found by computing the left and right eigenvectors of the data matrix. This gives a theoretical basis for @math -way spectral co-clustering algorithms proposed in the literature. We also show that solving row and column co-clustering is equivalent to solving row and column clustering separately, thus giving theoretical support for the claim: ``column clustering implies row clustering and vice versa''. In the last part, we generalize the Ky Fan theorem---which is the central theorem for explaining spectral clustering---to rectangular complex matrices, motivated by the results from bipartite graph analysis.
|
@cite_9 propose a method for transforming the affinity matrix induced from a directed graph into a symmetric matrix without ignoring the edge directions, so that clustering algorithms built for unipartite graphs can be applied unchanged.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1551713770"
],
"abstract": [
"To identify communities in directed networks, we propose a generalized form of modularity in directed networks by presenting the quantity LinkRank, which can be considered as the PageRank of links. This generalization is consistent with the original modularity in undirected networks and the modularity optimization methods developed for undirected networks can be directly applied to directed networks by optimizing our modified modularity. Also, a model network, which can be used as a benchmark network in further community studies, is proposed to verify our method. Our method is supposed to find communities effectively in citation- or reference-based directed networks."
]
}
|
1005.1545
|
2952621167
|
Semi-supervised support vector machines (S3VMs) are a popular kind of approach that tries to improve learning performance by exploiting unlabeled data. Though S3VMs have been found helpful in many situations, they may degenerate performance, and the resultant generalization ability may be even worse than that obtained using the labeled data only. In this paper, we try to reduce the chance of performance degeneration of S3VMs. Our basic idea is that, rather than exploiting all unlabeled data, the unlabeled instances should be selected such that only the ones which are very likely to be helpful are exploited, while highly risky unlabeled instances are avoided. We propose the S3VM- method by using hierarchical clustering to select the unlabeled instances. Experiments on a broad range of data sets over eighty-eight different settings show that the chance of performance degeneration of S3VM- is much smaller than that of existing S3VMs.
|
Roughly speaking, existing semi-supervised learning approaches mainly fall into four categories. The first category is generative methods, e.g., @cite_21 @cite_24 , which extend supervised generative models by exploiting unlabeled data in parameter estimation and label estimation using techniques such as the EM method. The second category is graph-based methods, e.g., @cite_0 @cite_20 @cite_15 , which encode both the labeled and unlabeled instances in a graph and then perform label propagation on the graph. The third category is disagreement-based methods, e.g., @cite_26 @cite_25 , which employ multiple learners and improve the learners through labeling the unlabeled data based on the exploitation of disagreement among the learners. The fourth category is S3VMs, e.g., @cite_16 @cite_4 , which use unlabeled data to regularize the decision boundary to go through low density regions @cite_8 .
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_24",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"2048679005",
"2107008379",
"",
"2137054688",
"",
"1585385982",
"2154455818",
"2107968230",
"2133556223",
""
],
"abstract": [
"We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 [email protected]",
"",
"",
"We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a \"mixture of experts\" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data - thus, a combined learning classification operation - much akin to what is done in image segmentation - can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.",
"",
"",
"We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.",
"We introduce a semi-supervised support vector machine (S3VM) method. Given a training set of labeled data and a working set of unlabeled data, S3VM constructs a support vector machine using both the training and working sets. We use S3VM to solve the transduction problem using overall risk minimization (ORM) posed by Vapnik. The transduction problem is to estimate the value of a classification function at the given points in the working set. This contrasts with the standard inductive learning problem of estimating the classification function at all possible values and then using the fixed function to deduce the classes of the working set data. We propose a general S3VM model that minimizes both the misclassification error and the function capacity based on all the available data. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program and then solved exactly using integer programming. Results of S3VM and the standard 1-norm support vector machine approach are compared on ten data sets. Our computational results support the statistical learning theory results showing that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the traditional approach.",
"In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance.",
""
]
}
|
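The graph-based category mentioned in the related-work note above can be made concrete with a few lines of NumPy: build an affinity graph over labeled and unlabeled points, then iteratively propagate the labels. This is a generic sketch in the spirit of local-and-global-consistency label propagation; the RBF affinity, the parameter values, and the fixed iteration count are illustrative assumptions rather than any cited paper's exact algorithm.

```python
import numpy as np

def label_propagation(X, y, alpha=0.99, sigma=1.0, iters=200):
    """Minimal graph-based semi-supervised sketch (consistency-style propagation).

    X     : (n, d) array of all instances, labeled and unlabeled.
    y     : length-n array with class indices for labeled points and -1 otherwise.
    alpha : propagation weight (how much neighbors influence each point).
    sigma : RBF bandwidth used to build the affinity graph.

    Illustrative only: affinity choice, normalization and stopping rule are
    assumptions, not the exact algorithm of any single cited paper.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(X)
    classes = np.unique(y[y >= 0])

    # dense RBF affinity graph with zero self-loops
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # symmetric normalization S = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # one-hot seed matrix for labeled points
    F = np.zeros((n, len(classes)))
    for k, c in enumerate(classes):
        F[y == c, k] = 1.0
    Y0 = F.copy()

    for _ in range(iters):                     # propagate labels over the graph
        F = alpha * S @ F + (1 - alpha) * Y0

    return classes[np.argmax(F, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    y = -np.ones(40, dtype=int)
    y[0], y[20] = 0, 1                         # only two labeled points
    print(label_propagation(X, y))
```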
1005.1545
|
2952621167
|
Semi-supervised support vector machines (S3VMs) are a popular kind of approach that tries to improve learning performance by exploiting unlabeled data. Though S3VMs have been found helpful in many situations, they may degenerate performance, and the resultant generalization ability may be even worse than that obtained using the labeled data only. In this paper, we try to reduce the chance of performance degeneration of S3VMs. Our basic idea is that, rather than exploiting all unlabeled data, the unlabeled instances should be selected such that only the ones which are very likely to be helpful are exploited, while highly risky unlabeled instances are avoided. We propose the S3VM- method by using hierarchical clustering to select the unlabeled instances. Experiments on a broad range of data sets over eighty-eight different settings show that the chance of performance degeneration of S3VM- is much smaller than that of existing S3VMs.
|
Though semi-supervised learning approaches have shown promising performance in many situations, it has been indicated by many authors that using unlabeled data may hurt the performance @cite_24 @cite_5 @cite_28 @cite_25 @cite_9 @cite_19 @cite_10 @cite_29 . In some application areas, especially the ones which require high reliability, users might be reluctant to use semi-supervised learning approaches for fear of obtaining performance worse than that of simply neglecting the unlabeled data. As typical semi-supervised learning approaches, S3VMs also suffer from this deficiency.
|
{
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_10",
"@cite_25"
],
"mid": [
"2170569305",
"2125592902",
"2114718442",
"",
"2110316582",
"",
"1589919686",
"2133556223"
],
"abstract": [
"This paper analyzes the performance of semi-supervised learning of mixture models. We show that unlabeled data can lead to an increase in classification error even in situations where additional labeled data would decrease classification error. We present a mathematical analysis of this \"degradation\" phenomenon and show that it is due to the fact that bias may be adversely affected by unlabeled data. We discuss the impact of these theoretical results to practical situations.",
"There has been increased interest in devising learning techniques that combine unlabeled data with labeled data -- i.e. semi-supervised learning. However, to the best of our knowledge, no study has been performed across various techniques and different types and amounts of labeled and unlabeled data. Moreover, most of the published work on semi-supervised learning techniques assumes that the labeled and unlabeled data come from the same distribution. It is possible for the labeling process to be associated with a selection bias such that the distributions of data points in the labeled and unlabeled sets are different. Not correcting for such bias can result in biased function approximation with potentially poor performance. In this paper, we present an empirical study of various semi-supervised learning techniques on a variety of datasets. We attempt to answer various questions such as the effect of independence or relevance amongst features, the effect of the size of the labeled and unlabeled sets and the effect of noise. We also investigate the impact of sample-selection bias on the semi -supervised learning techniques under study and implement a bivariate probit technique particularly designed to correct for such bias.",
"Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide a partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper, we attempt to bridge the gap between the practice and theory of semi-supervised learning. We develop a finite sample analysis that characterizes the value of un-labeled data and quantifies the performance improvement of SSL compared to supervised learning. We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.",
"",
"Semi-supervised methods use unlabeled data in addition to labeled data to construct predictors. While existing semi-supervised methods have shown some promising empirical performance, their development has been based largely based on heuristics. In this paper we study semi-supervised learning from the viewpoint of minimax theory. Our first result shows that some common methods based on regularization using graph Laplacians do not lead to faster minimax rates of convergence. Thus, the estimators that use the unlabeled data do not have smaller risk than the estimators that use only labeled data. We then develop several new approaches that provably lead to improved performance. The statistical tools of minimax analysis are thus used to offer some new perspective on the problem of semi-supervised learning.",
"",
"We study the potential benefits of unlabeled data to classification prediction to the learner. We compare learning in the semi-supervised model to the standard, supervised PAC (distribution free) model, considering both the realizable and the unrealizable (agnostic) settings. Roughly speaking, our conclusion is that access to unlabeled samples cannot provide sample size guarantees that are better than those obtainable without access to unlabeled data, unless one postulates very strong assumptions about the distribution of the labels. In particular, we prove that for basic hypothesis classes over the real line, if the distribution of unlabeled data is ‘smooth’, knowledge of that distribution cannot improve the labeled sample complexity by more than a constant factor (e.g., 2). We conjecture that a similar phenomena holds for any hypothesis class and any unlabeled data distribution. We also discuss the utility of semi-supervised learning under the common cluster assumption concerning the distribution of labels, and show that even in the most accommodating cases, where data is generated by two uni-modal label-homogeneous distributions, common SSL paradigms may be misleading and inflict poor prediction performance.",
"In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance."
]
}
|
1005.1545
|
2952621167
|
Semi-supervised support vector machines (S3VMs) are a popular kind of approach that tries to improve learning performance by exploiting unlabeled data. Though S3VMs have been found helpful in many situations, they may degenerate performance, and the resultant generalization ability may be even worse than that obtained using the labeled data only. In this paper, we try to reduce the chance of performance degeneration of S3VMs. Our basic idea is that, rather than exploiting all unlabeled data, the unlabeled instances should be selected such that only the ones which are very likely to be helpful are exploited, while highly risky unlabeled instances are avoided. We propose the S3VM- method by using hierarchical clustering to select the unlabeled instances. Experiments on a broad range of data sets over eighty-eight different settings show that the chance of performance degeneration of S3VM- is much smaller than that of existing S3VMs.
|
The usefulness of unlabeled data has been discussed theoretically @cite_19 @cite_10 @cite_29 and validated empirically @cite_9 . Much of the literature indicates that unlabeled data should be used carefully. For generative methods, @cite_28 showed that unlabeled data can increase error even in situations where additional labeled data would decrease error. One main conjecture attributes the performance degeneration to the difficulty of making the right model assumption, without which fitting the unlabeled data can degrade performance. For graph-based methods, more and more researchers recognize that graph construction is more crucial than how the labels are propagated, and some attempts have been devoted to using domain knowledge or constructing robust graphs @cite_2 @cite_17 . As for disagreement-based methods, the generalization ability has been studied with plentiful theoretical results based on different assumptions @cite_26 @cite_13 @cite_22 @cite_14 . As for S3VMs, the correctness of the S3VM objective has been studied on small data sets @cite_18 .
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_19",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"2140076625",
"2122565017",
"2048679005",
"",
"1824737917",
"2170569305",
"2114718442",
"2125592902",
"2110316582",
"2136504847",
"1589919686",
"2141923507"
],
"abstract": [
"The rule-based bootstrapping introduced by Yarowsky, and its co-training variant by Blum and Mitchell, have met with considerable empirical success. Earlier work on the theory of co-training has been only loosely related to empirically useful co-training algorithms. Here we give a new PAC-style bound on generalization error which justifies both the use of confidences — partial rules and partial labeling of the unlabeled data — and the use of an agreement-based objective function as suggested by Collins and Singer. Our bounds apply to the multiclass case, i.e., where instances are to be assigned one of labels for k ≥ 2.",
"Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VMs algorithms is studied together, under a common experimental setting.",
"We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 [email protected]",
"",
"Co-training is a semi-supervised learning paradigm which trains two learners respectively from two different views and lets the learners label some unlabeled examples for each other. In this paper, we present a new PAC analysis on co-training style algorithms. We show that the co-training process can succeed even without two views, given that the two learners have large difference, which explains the success of some co-training style algorithms that do not require two views. Moreover, we theoretically explain that why the co-training process could not improve the performance further after a number of rounds, and present a rough estimation on the appropriate round to terminate co-training to avoid some wasteful learning rounds.",
"This paper analyzes the performance of semi-supervised learning of mixture models. We show that unlabeled data can lead to an increase in classification error even in situations where additional labeled data would decrease classification error. We present a mathematical analysis of this \"degradation\" phenomenon and show that it is due to the fact that bias may be adversely affected by unlabeled data. We discuss the impact of these theoretical results to practical situations.",
"Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide a partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper, we attempt to bridge the gap between the practice and theory of semi-supervised learning. We develop a finite sample analysis that characterizes the value of un-labeled data and quantifies the performance improvement of SSL compared to supervised learning. We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.",
"There has been increased interest in devising learning techniques that combine unlabeled data with labeled data -- i.e. semi-supervised learning. However, to the best of our knowledge, no study has been performed across various techniques and different types and amounts of labeled and unlabeled data. Moreover, most of the published work on semi-supervised learning techniques assumes that the labeled and unlabeled data come from the same distribution. It is possible for the labeling process to be associated with a selection bias such that the distributions of data points in the labeled and unlabeled sets are different. Not correcting for such bias can result in biased function approximation with potentially poor performance. In this paper, we present an empirical study of various semi-supervised learning techniques on a variety of datasets. We attempt to answer various questions such as the effect of independence or relevance amongst features, the effect of the size of the labeled and unlabeled sets and the effect of noise. We also investigate the impact of sample-selection bias on the semi -supervised learning techniques under study and implement a bivariate probit technique particularly designed to correct for such bias.",
"Semi-supervised methods use unlabeled data in addition to labeled data to construct predictors. While existing semi-supervised methods have shown some promising empirical performance, their development has been based largely based on heuristics. In this paper we study semi-supervised learning from the viewpoint of minimax theory. Our first result shows that some common methods based on regularization using graph Laplacians do not lead to faster minimax rates of convergence. Thus, the estimators that use the unlabeled data do not have smaller risk than the estimators that use only labeled data. We then develop several new approaches that provably lead to improved performance. The statistical tools of minimax analysis are thus used to offer some new perspective on the problem of semi-supervised learning.",
"Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.",
"We study the potential benefits of unlabeled data to classification prediction to the learner. We compare learning in the semi-supervised model to the standard, supervised PAC (distribution free) model, considering both the realizable and the unrealizable (agnostic) settings. Roughly speaking, our conclusion is that access to unlabeled samples cannot provide sample size guarantees that are better than those obtainable without access to unlabeled data, unless one postulates very strong assumptions about the distribution of the labels. In particular, we prove that for basic hypothesis classes over the real line, if the distribution of unlabeled data is ‘smooth’, knowledge of that distribution cannot improve the labeled sample complexity by more than a constant factor (e.g., 2). We conjecture that a similar phenomena holds for any hypothesis class and any unlabeled data distribution. We also discuss the utility of semi-supervised learning under the common cluster assumption concerning the distribution of labels, and show that even in the most accommodating cases, where data is generated by two uni-modal label-homogeneous distributions, common SSL paradigms may be misleading and inflict poor prediction performance.",
"Graph based semi-supervised learning (SSL) methods play an increasingly important role in practical machine learning systems. A crucial step in graph based SSL methods is the conversion of data into a weighted graph. However, most of the SSL literature focuses on developing label inference algorithms without extensively studying the graph building method and its effect on performance. This article provides an empirical study of leading semi-supervised methods under a wide range of graph construction algorithms. These SSL inference algorithms include the Local and Global Consistency (LGC) method, the Gaussian Random Field (GRF) method, the Graph Transduction via Alternating Minimization (GTAM) method as well as other techniques. Several approaches for graph construction, sparsification and weighting are explored including the popular k-nearest neighbors method (kNN) and the b-matching method. As opposed to the greedily constructed kNN graph, the b-matched graph ensures each node in the graph has the same number of edges and produces a balanced or regular graph. Experimental results on both artificial data and real benchmark datasets indicate that b-matching produces more robust graphs and therefore provides significantly better prediction accuracy without any significant change in computation time."
]
}
|
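Since several of the rows above concern S3VMs, it may help to spell out the objective they optimize: a margin term, hinge loss on labeled points, and a "hat" loss on unlabeled points that pushes the decision boundary into low-density regions. The sketch below only evaluates this standard transductive objective for a linear classifier; the weights C and C_u are hypothetical, and real S3VM solvers handle the non-convexity with specialized optimization (see the survey cited as @cite_18).

```python
import numpy as np

def s3vm_objective(w, b, X_lab, y_lab, X_unl, C=1.0, C_u=0.5):
    """Value of the (non-convex) S3VM objective for a linear decision function.

    Standard textbook form: margin term + hinge loss on labeled data + "hat"
    loss max(0, 1 - |f(x)|) on unlabeled data, which pushes the boundary into
    low-density regions.  C and C_u are illustrative trade-off weights, not
    values from any cited experiment.
    """
    f_lab = X_lab @ w + b
    f_unl = X_unl @ w + b
    margin = 0.5 * np.dot(w, w)
    hinge = np.maximum(0.0, 1.0 - y_lab * f_lab).sum()       # labeled loss
    hat = np.maximum(0.0, 1.0 - np.abs(f_unl)).sum()          # unlabeled loss
    return margin + C * hinge + C_u * hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_lab = np.array([[-2.0, 0.0], [2.0, 0.0]])
    y_lab = np.array([-1.0, 1.0])
    X_unl = np.vstack([rng.normal([-2, 0], 0.3, (10, 2)),
                       rng.normal([2, 0], 0.3, (10, 2))])
    # a boundary through the low-density gap scores better than one through a cluster
    print(s3vm_objective(np.array([1.0, 0.0]), 0.0, X_lab, y_lab, X_unl))
    print(s3vm_objective(np.array([1.0, 0.0]), 2.0, X_lab, y_lab, X_unl))
```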
1005.1454
|
2950747895
|
We present families of (hyper)elliptic curve which admit an efficient deterministic encoding function.
|
Compared to Icart's formulae @cite_0 , this encoding has two drawbacks of limited practical impact: (i) it does not work for arbitrary elliptic curves, but only for Hessian curves; (ii) the subset of the curve which can be parameterized is slightly smaller than in Icart's case: we get @math points against approximately @math .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2131559681"
],
"abstract": [
"We describe a new explicit function that given an elliptic curve E defined over @math , maps elements of @math into E in deterministic polynomial time and in a constant number of operations over @math . The function requires to compute a cube root. As an application we show how to hash deterministically into an elliptic curve."
]
}
|
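For context on the row above, Icart's encoding admits a very short implementation when p = 2 (mod 3), since cubing is then a bijection and cube roots are unique. The sketch below follows the published formula v = (3a - u^4)/(6u), x = (v^2 - b - u^6/27)^(1/3) + u^2/3, y = ux + v; the toy prime and curve coefficients are illustrative assumptions with no cryptographic significance.

```python
# A sketch of Icart-style deterministic encoding into an elliptic curve
# y^2 = x^3 + a*x + b over F_p with p = 2 (mod 3), where cubing is a bijection
# so cube roots are unique: t^(1/3) = t^((2p-1)//3).  Toy parameters below are
# hypothetical; they only exercise the formula, they are not secure choices.

def icart_encode(u, a, b, p):
    """Map a field element u != 0 to a point on y^2 = x^3 + a*x + b (mod p)."""
    assert p % 3 == 2 and u % p != 0
    cube_root = lambda t: pow(t, (2 * p - 1) // 3, p)   # unique cube root mod p
    inv = lambda t: pow(t, p - 2, p)                    # modular inverse
    v = (3 * a - pow(u, 4, p)) * inv(6 * u) % p
    x = (cube_root((v * v - b - pow(u, 6, p) * inv(27)) % p) + u * u * inv(3)) % p
    y = (u * x + v) % p
    return x, y

if __name__ == "__main__":
    p, a, b = 1019, 2, 3          # toy curve parameters (illustrative only)
    for u in range(1, 6):
        x, y = icart_encode(u, a, b, p)
        assert (y * y - (x ** 3 + a * x + b)) % p == 0   # point is on the curve
        print(u, "->", (x, y))
```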
1005.1694
|
1719697784
|
The firefighter problem is a monotone dynamic process in graphs that can be viewed as modeling the use of a limited supply of vaccinations to stop the spread of an epidemic. In more detail, a fire spreads through a graph, from burning vertices to their unprotected neighbors. In every round, a small number of unburnt vertices can be protected by firefighters. How many firefighters per turn, on average, are needed to stop the fire from advancing? We prove tight lower and upper bounds on the number of firefighters needed to control a fire in the Cartesian planar grid and in the strong planar grid, resolving two conjectures of Ng and Raff.
|
The firefighter problem is loosely connected with Conway's angel problem @cite_7 . This is a game of pursuit in @math , in which the angel can move to any point within @math -distance @math and the devil can destroy one unoccupied point per turn, bearing similarities to the @math case of the firefighter problem. The two main differences between the angel problem and the firefighter problem are: (i) the fire is adaptive, that is, it need not choose its path in advance; (ii) the firefighters play a predetermined strategy, that is, they cannot adapt their strategy to the fire's advancement. It is known that for @math , where the fractional version is defined appropriately, the devil wins @cite_5 , and that for @math the angel wins @cite_6 @cite_13 @cite_3 @cite_14 . Our results, when presented as a variant of the angel problem in which the fire is more powerful, show that the threshold is @math instead of @math .
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_13"
],
"mid": [
"2166602752",
"1517986078",
"2070582302",
"2106768299",
"1529287620",
""
],
"abstract": [
"We solve Conway's Angel Problem by showing that the Angel of power 2 has a winning strategy. An old observation of Conway is that we may suppose without loss of generality that the Angel never jumps to a square where he could have already landed at a previous time. We turn this observation around and prove that we may suppose without loss of generality that the Devil never eats a square where the Angel could have already jumped. Then we give a simple winning strategy for the Angel.",
"In the quarter of a century since three mathematicians and game theorists collaborated to create Winning Ways for Your Mathematical Plays, the book has become the definitive work on the subject of mathematical games. Now carefully revised and broken down into four volumes to accommodate new developments, the Second Edition retains the original's wealth of wit and wisdom. The authors' insightful strategies, blended with their witty and irreverent style, make reading a profitable pleasure. In Volume 4, the authors present a Diamond of a find, covering one-player games such as Solitaire.",
"We solve the Angel Problem, by describing a strategy that guarantees the win of an Angel of power 2 or greater. Basically, the Angel should move north as quickly as possible. However, he should detour around eaten squares, as long as the extra distance does not exceed twice the number of eaten squares evaded. We show that an Angel following this strategy will always spot a trap early enough to avoid it.",
"We show that in the game of angel and devil, played on the planar integer lattice, the angel of power 4 can evade the devil. This answers a question of Berlekamp, Conway and Guy. Independent proofs that work for the angel of power 2 have been given by Kloster and by Mathe.",
"",
""
]
}
|
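The firefighter process described in the abstract above is easy to simulate: each round a fixed budget of vertices is protected, then the fire spreads to all unprotected neighbors. The sketch below runs the process on a finite Cartesian grid with a deliberately naive wall-building strategy; it only illustrates the dynamics and says nothing about the tight thresholds proved in the paper.

```python
# Toy simulation of the firefighter process on a finite Cartesian grid: each
# round, a fixed budget of vertices is protected, then the fire spreads to
# every unprotected, unburnt neighbor.  The barrier-building "strategy" below
# is a naive illustration, not one of the strategies analyzed in the paper.

def simulate_firefighter(n=21, budget=2, rounds=15):
    burning = {(n // 2, n // 2)}            # fire starts in the middle
    protected = set()

    def neighbors(v):
        x, y = v
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # Cartesian grid
            if 0 <= x + dx < n and 0 <= y + dy < n:
                yield (x + dx, y + dy)

    # naive strategy: spend the per-round budget building a wall along row r
    r = n // 2 + 2
    wall = iter([(x, r) for x in range(n)])

    for t in range(rounds):
        for _ in range(budget):             # protect `budget` vertices per round
            v = next(wall, None)
            if v is not None and v not in burning:
                protected.add(v)
        spread = {w for v in burning for w in neighbors(v)
                  if w not in burning and w not in protected}
        if not spread:
            print(f"fire contained after round {t}, burnt = {len(burning)}")
            return burning, protected
        burning |= spread
    print(f"not contained in {rounds} rounds, burnt = {len(burning)}")
    return burning, protected

if __name__ == "__main__":
    simulate_firefighter()
```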
1005.0106
|
2952113529
|
This work proposes a new distributed and self-organized authentication scheme for Mobile Ad-hoc NETworks (MANETs). Apart from describing all its components, special emphasis is placed on proving that the proposal fulfils most requirements derived from the special characteristics of MANETs, including limited physical protection of broadcast medium, frequent route changes caused by mobility, and lack of structured hierarchy. Interesting conclusions are obtained from an analysis of simulation experiments in different scenarios.
|
Another interesting identification paradigm that has been used in wireless ad-hoc networks is the notion of a chain of trust @cite_16 , but it fails if malicious nodes are within the network. Another typical solution is location-limited authentication, which is based on the fact that most ad-hoc networks exist in small areas and physical authentication may be carried out between nodes that are close to each other. However, location-limited authentication is not feasible for large, group-based settings.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2069519268"
],
"abstract": [
"So far, research on mobile ad hoc networks has been forcused primarily on routing issues. Security, on the other hand, has been given a lower priority. This paper provides an overview of security problems for mobile ad hoc networks, distinguishing the threats on basic mechanisms and on security mechanisms. It then describes our solution to protect the security mechanisms. The original features of this solution include that (i) it is fully decentralized and (ii) all nodes are assigned equivalent roles."
]
}
|
1005.0106
|
2952113529
|
This work proposes a new distributed and self-organized authentication scheme for Mobile Ad-hoc NETworks (MANETs). Apart from describing all its components, special emphasis is placed on proving that the proposal fulfils most requirements derived from the special characteristics of MANETs, including limited physical protection of broadcast medium, frequent route changes caused by mobility, and lack of structured hierarchy. Interesting conclusions are obtained from an analysis of simulation experiments in different scenarios.
|
Later, @cite_17 developed a group access control framework based on a menu of cryptographic techniques, which included simple access control policies, such as static ACLs (Access Control Lists), as well as admission based on the decision of a fixed entity: external (e.g., a CA or a Trusted Third Party) or internal (e.g., a group founder). The main drawback of such a proposal is that those policies are inflexible and unsuitable for dynamic ad-hoc networks. For instance, static ACLs enumerate all possible members and hence cannot support truly dynamic membership, and admission decisions made by a Trusted Third Party (TTP) or a group founder violate the peer nature of the underlying ad-hoc group.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2104702759"
],
"abstract": [
"Security in collaborative peer groups is an active research topic. Most previous work focused on key management without addressing an important pre-requisite: admission control, i.e., how to securely admit a new member. This paper represents an initial attempt to sketch out an admission control framework suitable for different flavors of peer groups and match them with appropriate cryptographic techniques and protocols. Open problems and directions for future work are identified and discussed."
]
}
|
1005.0106
|
2952113529
|
This work proposes a new distributed and self-organized authentication scheme for Mobile Ad-hoc NETworks (MANETs). Apart from describing all its components, special emphasis is placed on proving that the proposal fulfils most requirements derived from the special characteristics of MANETs, including limited physical protection of broadcast medium, frequent route changes caused by mobility, and lack of structured hierarchy. Interesting conclusions are obtained from an analysis of simulation experiments in different scenarios.
|
Up to now, very few publications have proposed authentication systems for ad-hoc networks using ZKPs. Two of them are @cite_5 and @cite_0 , but neither dealt with the related problem of topology changes in the network. Another recent ZKP-based proposal for MANETs related to the one proposed here is the hierarchical scheme described in @cite_13 , where two different security levels were defined through the use of a hard-on-average graph problem, but again no topology changes were considered. On the other hand, two works that may be considered the seed of this work are @cite_23 and @cite_1 . The main differences between the proposal of this paper and both references are the following: definition of the node life-cycle, analysis of possible attacks, description of necessary assumptions, provision of a larger example, more data about performance analysis, and a comparison with existing solutions.
|
{
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_13"
],
"mid": [
"1570801071",
"1619974530",
"1481977563",
"2145566469",
"2167118281"
],
"abstract": [
"This work provides both a new global authentication system for Mobile Ad-hoc NETworks and a study of its simulation with NS-2. The proposed scheme is constructed in a self-organizing manner, which implies the fulfillment of most requirements for this type of networks, such as adaptation to the changing topology of the network, public availability of broadcast communications and strength of access control.",
"This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, a ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, basing on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs) is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations.",
"This work proposes a new global authentication system for Mobile Ad-hoc Networks. The component algorithms are designed in a self-organizing way so that most needs of this sort of networks are covered. In particular, characteristics such as adaptation to the varying topology of the network, open availability of broadcast transmissions, and strong access control have received special attention when defining the new scheme. The described protocol is based on the cryptographic paradigm of Zero-Knowledge Proofs. In this paper the design is thought for the Hamiltonian Cycle Problem, but it might be easily adapted to other NP-complete graph problems.",
"In a mobile ad-hoc network (MANET) architecture, there is no pre-existing fixed network infrastructure, and a mobile node in this network sends data packets to a destination node directly or through its neighbor nodes. This situation is of potential security concern since the neighbor nodes cannot be always trusted. In this paper, we design a group member authentication protocol used in a MANET. It aims to allow a set of nodes to legitimately participate in group communication and then distribute a secret group key to the approved nodes to establish secure communication with group members. Our protocol provides knowledge-based group member authentication, which recognizes a list of secret group keys held in a mobile node as the node's group membership. It employs zero knowledge proof and threshold cryptography. We then introduce our actual implementation and evaluate the behavior to ensure its successful deployment",
"This work addresses the critical problem of authentication in mobile ad hoc networks. It includes a new approach based on the Zero-Knowledge cryptographic paradigm where two different security levels are defined. The first level is characterized by the use of an NP-complete graph problem to describe an Access Control Protocol, while the highest level corresponds to a Group Authentication Protocol based on a hard-on-average graph problem. The main goal of the proposal is to balance security strength and network performance. Therefore, both protocols are scalable and decentralized, and their requirements of communication, storage and computation are limited."
]
}
|
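The ZKP-based schemes discussed above build on the classical zero-knowledge proof of knowledge of a Hamiltonian cycle (Blum's protocol): commit to a randomly permuted copy of the graph, then either reveal the permutation or open only the commitments along the cycle, depending on a one-bit challenge. The sketch below implements one such round with simple hash commitments; it illustrates the underlying primitive, not the MANET protocols of the cited papers, and a real proof would repeat the round many times, since each round has soundness error 1/2.

```python
# One round of the classical zero-knowledge proof for knowledge of a
# Hamiltonian cycle (Blum's protocol).  Hash commitments and the toy graph
# are illustrative assumptions, not part of any cited scheme.
import hashlib
import secrets

def commit(bit, nonce):
    return hashlib.sha256(f"{bit}|{nonce}".encode()).hexdigest()

def prover_commit(adj, cycle):
    """Permute the graph and commit to every entry of its adjacency matrix."""
    n = len(adj)
    perm = list(range(n))
    secrets.SystemRandom().shuffle(perm)
    padj = [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
    nonces = [[secrets.token_hex(8) for _ in range(n)] for _ in range(n)]
    comms = [[commit(padj[i][j], nonces[i][j]) for j in range(n)] for i in range(n)]
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    pcycle = [inv[v] for v in cycle]         # the secret cycle, relabeled by perm
    return comms, (perm, padj, nonces, pcycle)

def prover_respond(state, challenge):
    perm, padj, nonces, pcycle = state
    if challenge == 0:                       # reveal the permutation, open everything
        return ("open_all", perm, padj, nonces)
    openings = []                            # open only the entries along the cycle
    n = len(pcycle)
    for k in range(n):
        i, j = pcycle[k], pcycle[(k + 1) % n]
        openings.append((i, j, padj[i][j], nonces[i][j]))
    return ("open_cycle", openings)

def verify(adj, comms, challenge, response):
    n = len(adj)
    if challenge == 0:                       # committed graph must equal pi(G)
        _, perm, padj, nonces = response
        if sorted(perm) != list(range(n)):
            return False
        for i in range(n):
            for j in range(n):
                if comms[i][j] != commit(padj[i][j], nonces[i][j]):
                    return False
                if padj[i][j] != adj[perm[i]][perm[j]]:
                    return False
        return True
    _, openings = response                   # opened entries must be edges (1s)
    if len(openings) != n:                   # forming one cycle through all vertices
        return False
    for k, (i, j, bit, nonce) in enumerate(openings):
        if bit != 1 or comms[i][j] != commit(bit, nonce):
            return False
        if j != openings[(k + 1) % n][0]:    # consecutive edges must chain
            return False
    return len({i for i, _, _, _ in openings}) == n

if __name__ == "__main__":
    # toy public graph: the 5-cycle 0-1-2-3-4-0 (the prover's secret) plus a chord
    n = 5
    adj = [[0] * n for _ in range(n)]
    cycle = [0, 1, 2, 3, 4]
    for k in range(n):
        a, b = cycle[k], cycle[(k + 1) % n]
        adj[a][b] = adj[b][a] = 1
    adj[0][2] = adj[2][0] = 1
    comms, state = prover_commit(adj, cycle)
    ch = secrets.randbelow(2)                # verifier's one-bit challenge
    print("challenge", ch, "accepted:", verify(adj, comms, ch, prover_respond(state, ch)))
```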
1004.4489
|
2953290265
|
We propose to use MapReduce to quickly test new retrieval approaches on a cluster of machines by sequentially scanning all documents. We present a small case study in which we use a cluster of 15 low cost machines to search a web crawl of 0.5 billion pages, showing that sequential scanning is a viable approach to running large-scale information retrieval experiments with little effort. The code is available to other researchers at: this http URL
|
The idea of using sequential scanning of documents to research new retrieval approaches is certainly not new: we know of at least one researcher who used sequential scanning over ten years ago for his thesis @cite_5 . Without high-level programming paradigms like MapReduce, however, efficiently implementing sequential scanning is not a trivial task, and without a cluster of machines the approach does not scale to large collections.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2131133093"
],
"abstract": [
"Because of the world wide web, information retrieval systems are now used by millions of untrained users all over the world. The search engines that perform the information retrieval tasks, often retrieve thousands of potentially interesting documents to a query. The documents should be ranked in decreasing order of relevance in order to be useful to the user. This book describes a mathematical model of information retrieval based on the use of statistical language models. The approach uses simple document-based unigram models to compute for each document the probability that it generates the query. This probability is used to rank the documents. The study makes the following research contributions. * The development of a model that integrates term weighting, relevance feedback and structured queries. * The development of a model that supports multiple representations of a request or information need by integrating a statistical translation model. * The development of a model that supports multiple representations of a document, for instance by allowing proximity searches or searches for terms from a particular record field (e.g. a search for terms from the title). * A mathematical interpretation of stop word removal and stemming. * A mathematical interpretation of operators for mandatory terms, wildcards and synonyms. * A practical comparison of a language model-based retrieval system with similar systems that are based on well-established models and term weighting algorithms in a controlled experiment. * The application of the model to cross-language information retrieval and adaptive information filtering, and the evaluation of two prototype systems in a controlled experiment. Experimental results on three standard tasks show that the language model-based algorithms work as well as, or better than, today's top-performing retrieval algorithms. The standard tasks investigated are ad-hoc retrieval (when there are no previously retrieved documents to guide the search), retrospective relevance weighting (find the optimum model for a given set of relevant documents), and ad-hoc retrieval using manually formulated Boolean queries. The application to cross-language retrieval and adaptive filtering shows the practical use of respectively structured queries, and relevance feedback."
]
}
|
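A map/reduce-shaped sequential scan needs very little code even without Hadoop; the sketch below scores every document against a query in a map step and keeps the top-k in a reduce step. The toy term-frequency scoring, the in-memory collection, and the function names are assumptions for illustration, not the authors' released code.

```python
# Minimal map/reduce-shaped sequential scan over a document collection, in
# plain Python (no Hadoop): map_doc() scores one document against the query,
# reduce_topk() keeps the best k.  The toy TF scoring and in-memory collection
# stand in for whatever ranking function and crawl a real experiment would use.
import heapq
from collections import Counter

QUERY = {"spectral", "clustering"}

def map_doc(doc_id, text):
    """Emit (doc_id, score) for one document -- the per-record map step."""
    tf = Counter(text.lower().split())
    score = sum(tf[t] for t in QUERY)          # toy term-frequency score
    if score > 0:
        yield doc_id, score

def reduce_topk(pairs, k=10):
    """Keep only the k highest-scoring documents -- the reduce step."""
    return heapq.nlargest(k, pairs, key=lambda p: p[1])

if __name__ == "__main__":
    collection = {
        "d1": "spectral clustering of graphs",
        "d2": "support vector machines",
        "d3": "clustering clustering everywhere",
    }
    scored = (pair for doc_id, text in collection.items()
              for pair in map_doc(doc_id, text))
    print(reduce_topk(scored, k=2))            # e.g. [('d1', 2), ('d3', 2)]
```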
1004.4489
|
2953290265
|
We propose to use MapReduce to quickly test new retrieval approaches on a cluster of machines by sequentially scanning all documents. We present a small case study in which we use a cluster of 15 low cost ma- chines to search a web crawl of 0.5 billion pages showing that sequential scanning is a viable approach to running large-scale information retrieval experiments with little effort. The code is available to other researchers at: this http URL
|
Lin @cite_2 used Hadoop MapReduce for computing pairwise document similarities. Our implementation resembles Lin's brute force algorithm, which also scans document representations linearly. Our approach is simpler because our preprocessing step does not divide the collection into blocks, nor does it compute document vectors.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2062028408"
],
"abstract": [
"This paper explores the problem of computing pairwise similarity on document collections, focusing on the application of \"more like this\" queries in the life sciences domain. Three MapReduce algorithms are introduced: one based on brute force, a second where the problem is treated as large-scale ad hoc retrieval, and a third based on the Cartesian product of postings lists. Each algorithm supports one or more approximations that trade effectiveness for efficiency, the characteristics of which are studied experimentally. Results show that the brute force algorithm is the most efficient of the three when exact similarity is desired. However, the other two algorithms support approximations that yield large efficiency gains without significant loss of effectiveness."
]
}
|
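Lin's brute-force variant mentioned above amounts to comparing every document's term vector against every other's. The plain-Python sketch below does exactly that with cosine similarity on a toy corpus; the weighting scheme and corpus are illustrative assumptions, and in a MapReduce setting each mapper would take one document and scan the rest.

```python
# Brute-force pairwise document similarity in the spirit of the linear-scan
# approach discussed above: compare each document's term vector against every
# other document's.  Cosine weighting and the tiny corpus are illustrative.
import math
from collections import Counter

def term_vector(text):
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def pairwise_similarities(docs):
    """Quadratic scan over all (i, j) pairs; trivially parallelizable since
    each mapper can take one document and scan the remainder."""
    ids = list(docs)
    vecs = {d: term_vector(docs[d]) for d in ids}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            yield a, b, cosine(vecs[a], vecs[b])

if __name__ == "__main__":
    docs = {
        "d1": "mapreduce for large scale retrieval",
        "d2": "large scale retrieval with mapreduce",
        "d3": "elliptic curve point encoding",
    }
    for a, b, s in pairwise_similarities(docs):
        print(a, b, round(s, 3))
```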
1004.4520
|
1998168484
|
This paper is a first study on the topic of achieving physical layer security by exploiting non-systematic channel codes. The possibility of implementing transmission security at the physical layer has been known in information theory for many years, but it is now gaining increasing interest due to its many possible applications. It has been shown that channel coding techniques can be effectively exploited for designing physical layer security schemes, able to ensure that an unauthorized receiver, experiencing a channel different from that of the authorized receiver, is not able to gather any information. Recently, it has been proposed to exploit puncturing techniques in order to reduce the security gap between the authorized and unauthorized channels. In this paper, we show that the same target can also be achieved by using non-systematic codes, able to scramble information bits within the transmitted codeword.
|
Several works have been devoted to the study of which transmission techniques are best suited to reduce the security gap. In particular, in @cite_4 , the authors propose the use of punctured codes, associating the secret bits with the punctured bits. They consider punctured LDPC codes and show that this technique, for a fixed secrecy rate, is able to guarantee a considerable reduction in the security gap with respect to non-punctured (systematic) transmission.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2162970283"
],
"abstract": [
"In this paper we consider tandem error control coding and cryptography in the setting of the wiretap channel due to Wyner. In a typical communications system a cryptographic application is run at a layer above the physical layer and assumes the channel is error free. However, in any real application the channels for friendly users and passive eavesdroppers are not error free and Wyner's wiretap model addresses this scenario. Using this model, we show the security of a common cryptographic primitive, i.e. a keystream generator based on linear feedback shift registers (LFSR), can be strengthened by exploiting properties of the physical layer. A passive eavesdropper can be made to experience greater difficulty in cracking an LFSR-based cryptographic system insomuch that the computational complexity of discovering the secret key increases by orders of magnitude, or is altogether infeasible. This result is shown for two fast correlation attacks originally presented by Meier and Staffelbach, in the context of channel errors due to the wiretap channel model."
]
}
|
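The puncturing idea in the row above can be illustrated with a toy systematic code: place the secret bits on the punctured (never transmitted) positions and send only parity, so that the intended receiver reconstructs the secret from the code structure while an eavesdropper who loses parity bits cannot. The tiny GF(2) matrices and the erasure argument below are illustrative assumptions, not the punctured-LDPC construction of the cited work.

```python
# Toy illustration of the "secret bits on punctured positions" idea: encode
# the secret with a systematic code [I | P], puncture the systematic part,
# and transmit only the parity.  The matrices and erasure model are
# illustrative assumptions, not the punctured-LDPC scheme of @cite_4.
import numpy as np

P = np.array([[1, 1, 0, 0],          # parity part of a systematic generator [I | P]
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=int)
P_INV = np.array([[1, 1, 1, 1],      # inverse of P over GF(2), checked below
                  [0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=int)
assert np.array_equal(P @ P_INV % 2, np.eye(4, dtype=int))

def transmit(secret):
    """Systematic codeword is [secret | parity]; puncture `secret`, send parity."""
    return secret @ P % 2             # the secret bits themselves never go on air

def receive(parity):
    """Authorized receiver: rebuild the punctured (secret) bits from parity."""
    return parity @ P_INV % 2

if __name__ == "__main__":
    secret = np.array([1, 0, 1, 1])
    parity = transmit(secret)
    print("on air:   ", parity)        # only parity bits are observable
    print("recovered:", receive(parity))
    # an eavesdropper whose worse channel erases one parity bit is left with
    # two candidate secrets consistent with the three remaining equations
```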
1004.3714
|
2122426217
|
This paper develops upper bounds on the end-to-end transmission capacity of multihop wireless networks. Potential source-destination paths are dynamically selected from a pool of randomly located relays, from which a closed-form lower bound on the outage probability is derived in terms of the expected number of potential paths. This is in turn used to provide an upper bound on the number of successful transmissions that can occur per unit area, which is known as the transmission capacity. The upper bound results from assuming independence among the potential paths, and can be viewed as the maximum diversity case. A useful aspect of the upper bound is its simple form for an arbitrary-sized network, which allows insights into how the number of hops and other network parameters affect spatial throughput in the nonasymptotic regime. The outage probability analysis is then extended to account for retransmissions with a maximum number of allowed attempts. In contrast to prevailing wisdom, we show that predetermined routing (such as nearest neighbor) is suboptimal, since more hops are not useful once the network is interference-limited. Our results also make clear that randomness in the location of relay sets and dynamically varying channel states is helpful in obtaining higher aggregate throughput, and that dynamic route selection should be used to exploit path diversity.
|
The best-known metric for studying end-to-end network capacity is the transport capacity @cite_14 @cite_31 @cite_4 . This framework pioneered many notable studies on the limiting scaling behavior of ad hoc networks with the number of nodes @math by showing that the maximum transport capacity scales as @math in arbitrary networks @cite_14 . The feasibility of this throughput scaling has also been shown in random networks by relaying all information via crossing paths constructed through the network @cite_21 . Several other researchers have extended this framework to more general operating regimes, e.g. @cite_16 @cite_13 . Their findings have shown that nearest neighbor multihop routing is order-optimal in the power-limited regime, while hopping across clusters with distributed multiple-input and multiple-output (MIMO) communication can achieve order-optimal throughput in bandwidth-limited and power-inefficient regimes. However, most of these results are shown and proven for asymptotically large networks, which may not accurately describe non-asymptotic conditions. Moreover, scaling laws do not provide much information on how other network parameters imposed by a specific transmission strategy affect the throughput.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_21",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2137775453",
"2114927595",
"2135356058",
"2079313517",
"2002649876",
"2138515392"
],
"abstract": [
"When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.",
"We consider networks consisting of nodes with radios, and without any wired infrastructure, thus necessitating all communication to take place only over the shared wireless medium. The main focus of this paper is on the effect of fading in such wireless networks. We examine the attenuation regime where either the medium is absorptive, a situation which generally prevails, or the path loss exponent is greater than 3. We study the transport capacity, defined as the supremum over the set of feasible rate vectors of the distance weighted sum of rates. We consider two assumption sets. Under the first assumption set, which essentially requires only a mild time average type of bound on the fading process, we show that the transport capacity can grow no faster than O(n), where n denotes the number of nodes, even when the channel state information (CSI) is available noncausally at both the transmitters and the receivers. This assumption includes common models of stationary ergodic channels; constant, frequency-selective channels; flat, rapidly varying channels; and flat slowly varying channels. In the second assumption set, which essentially features an independence, time average of expectation, and nonzeroness condition on the fading process, we constructively show how to achieve transport capacity of spl Omega (n) even when the CSI is unknown to both the transmitters and the receivers, provided that every node has an appropriately nearby node. This assumption set includes common models of independent and identically distributed (i.i.d.) channels; constant, flat channels; and constant, frequency-selective channels. The transport capacity is achieved by nodes communicating only with neighbors, and using only point-to-point coding. The thrust of these results is that the multihop strategy, toward which much protocol development activity is currently targeted, is appropriate for fading environments. The low attenuation regime is open.",
"An achievable bit rate per source-destination pair in a wireless network of n randomly located nodes is determined adopting the scaling limit approach of statistical physics. It is shown that randomly scattered nodes can achieve, with high probability, the same 1 radicn transmission rate of arbitrarily located nodes. This contrasts with previous results suggesting that a 1 radicnlogn reduced rate is the price to pay for the randomness due to the location of the nodes. The network operation strategy to achieve the result corresponds to the transition region between order and disorder of an underlying percolation model. If nodes are allowed to transmit over large distances, then paths of connected nodes that cross the entire network area can be easily found, but these generate excessive interference. If nodes transmit over short distances, then such crossing paths do not exist. Percolation theory ensures that crossing paths form in the transition region between these two extreme scenarios. Nodes along these paths are used as a backbone, relaying data for other nodes, and can transport the total amount of information generated by all the sources. A lower bound on the achievable bit rate is then obtained by performing pairwise coding and decoding at each hop along the paths, and using a time division multiple access scheme",
"We derive upper bounds on the transport capacity of wireless networks. The bounds obtained are solely dependent on the geographic locations and power constraints of the nodes. As a result of this derivation, we are able to conclude the optimality, in the sense of scaling of transport capacity with the number of nodes, of a multihop communication strategy for a class of network topologies.",
"n source and destination pairs randomly located in an area want to communicate with each other. Signals transmitted from one user to another at distance r apart are subject to a power loss of r-alpha as well as a random phase. We identify the scaling laws of the information-theoretic capacity of the network when nodes can relay information for each other. In the case of dense networks, where the area is fixed and the density of nodes increasing, we show that the total capacity of the network scales linearly with n. This improves on the best known achievability result of n2 3 of Aeron and Saligrama. In the case of extended networks, where the density of nodes is fixed and the area increasing linearly with n, we show that this capacity scales as n2-alpha 2 for 2lesalpha 4. Thus, much better scaling than multihop can be achieved in dense networks, as well as in extended networks with low attenuation. The performance gain is achieved by intelligent node cooperation and distributed multiple-input multiple-output (MIMO) communication. The key ingredient is a hierarchical and digital architecture for nodal exchange of information for realizing the cooperation.",
"In analyzing the point-to-point wireless channel, insights about two qualitatively different operating regimes-bandwidth and power-limited-have proven indispensable in the design of good communication schemes. In this paper, we propose a new scaling law formulation for wireless networks that allows us to develop a theory that is analogous to the point-to-point case. We identify fundamental operating regimes of wireless networks and derive architectural guidelines for the design of optimal schemes. Our analysis shows that in a given wireless network with arbitrary size, area, power, bandwidth, etc., there are three parameters of importance: the short-distance signal-to-noise ratio (SNR), the long-distance SNR, and the power path loss exponent of the environment. Depending on these parameters, we identify four qualitatively different regimes. One of these regimes is especially interesting since it is fundamentally a consequence of the heterogeneous nature of links in a network and does not occur in the point-to-point case; the network capacity is both power and bandwidth limited. This regime has thus far remained hidden due to the limitations of the existing formulation. Existing schemes, either multihop transmission or hierarchical cooperation, fail to achieve capacity in this regime; we propose a new hybrid scheme that achieves capacity."
]
}
|
1004.3714
|
2122426217
|
This paper develops upper bounds on the end-to-end transmission capacity of multihop wireless networks. Potential source-destination paths are dynamically selected from a pool of randomly located relays, from which a closed-form lower bound on the outage probability is derived in terms of the expected number of potential paths. This is in turn used to provide an upper bound on the number of successful transmissions that can occur per unit area, which is known as the transmission capacity. The upper bound results from assuming independence among the potential paths, and can be viewed as the maximum diversity case. A useful aspect of the upper bound is its simple form for an arbitrary-sized network, which allows insights into how the number of hops and other network parameters affect spatial throughput in the nonasymptotic regime. The outage probability analysis is then extended to account for retransmissions with a maximum number of allowed attempts. In contrast to prevailing wisdom, we show that predetermined routing (such as nearest neighbor) is suboptimal, since more hops are not useful once the network is interference-limited. Our results also make clear that randomness in the location of relay sets and dynamically varying channel states is helpful in obtaining higher aggregate throughput, and that dynamic route selection should be used to exploit path diversity.
|
If node locations are modeled as a homogeneous Poisson point process (HPPP), a number of results can be applied from stochastic geometry, e.g. @cite_30 @cite_5 , in particular to compute the outage probability relative to a signal-to-interference-plus-noise ratio (SINR) threshold. These expressions can be inverted to give the maximum transmit intensity at a specified outage probability, which yields the transmission capacity of the network @cite_32 . This framework provides the maximum number of successful transmissions the network can support while simultaneously meeting a network-wide QoS requirement. It also allows closed-form expressions for the achievable throughput to be derived in non-asymptotic regimes, which are useful for examining how various communication techniques, channel models, and design parameters affect the aggregate throughput, e.g. @cite_1 @cite_26 @cite_20 @cite_11 @cite_27 @cite_25 @cite_29 @cite_28 ; see @cite_8 for a summary. While the transmission capacity can often be expressed in closed form without resorting to asymptotics, it is a single-hop or ``snapshot'' metric. Recent work @cite_9 @cite_12 began to investigate the throughput scaling with two-hop opportunistic relay selection under different channel gain distributions and relay deployments. However, more general multi-hop capacity has not proven tractable.
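As an illustration of how such expressions are inverted, the sketch below assumes the widely used Rayleigh-fading, interference-limited success probability for a Poisson field of interferers, p_s(λ) = exp(−λ π r² θ^{2/α} Γ(1+2/α) Γ(1−2/α)); the exact constant depends on the model, so this is a hedged example rather than the formula of any single cited paper.

```python
import math

def transmission_capacity(eps, r, theta, alpha):
    """Maximum density of *successful* transmissions under an outage constraint eps,
    assuming the Rayleigh-fading, interference-limited HPPP success probability
    p_s(lam) = exp(-lam * C), with C = pi * r**2 * theta**(2/alpha)
                                       * Gamma(1 + 2/alpha) * Gamma(1 - 2/alpha).
    Requires alpha > 2. Solving p_s(lam) = 1 - eps gives the maximum intensity."""
    C = (math.pi * r**2 * theta ** (2.0 / alpha)
         * math.gamma(1.0 + 2.0 / alpha) * math.gamma(1.0 - 2.0 / alpha))
    lam_eps = -math.log(1.0 - eps) / C   # maximum transmitter density meeting the outage target
    return lam_eps * (1.0 - eps)         # density of successful transmissions

# Example: 5% outage, 10 m links, 3 dB SIR threshold, path-loss exponent 4
print(transmission_capacity(eps=0.05, r=10.0, theta=10 ** 0.3, alpha=4.0))
```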
|
{
"cite_N": [
"@cite_30",
"@cite_12",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_27",
"@cite_5",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2132987440",
"2106558893",
"2168429069",
"2151792936",
"2149165606",
"2139956786",
"2088270194",
"2115094114",
"2095796369",
"2137079066",
"635250944",
"2130335471",
"2168339926",
"2131089922"
],
"abstract": [
"An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.",
"We consider a time-slotted two-hop wireless system in which the sources transmit to the relays in the even time slots (first hop) and the relays forward the packets to the destinations in the odd time slots (second hop). Each source may connect to multiple relays in the first hop. In the presence of interference and without tight coordination of the relays, it is not clear which relays should transmit the packet. We propose four decentralized methods of relay selection, some based on location information and others based on the received signal strength (RSS).We provide a complete analytical characterization of these methods using tools from stochastic geometry. We use simulation results to compare these methods in terms of end-to-end success probability.",
"The transmission capacity (TC) of a wireless ad hoc network is defined as the maximum spatial intensity of successful transmissions such that the outage probability does not exceed some specified threshold. This work studies the improvement in TC obtainable with successive interference cancellation (SIC), an important receiver technique that has been shown to achieve the capacity of several classes of multiuser channels, but has not been carefully evaluated in the context of ad hoc wireless networks. This paper develops closed-form upper bounds and easily computable lower bounds for the TC of ad hoc networks with SIC receivers, for both perfect and imperfect SIC. The analysis applies to any multiuser receiver that cancels the K strongest interfering signals by a factor z isin [0, 1]. In addition to providing the first closed-form capacity results for SIC in ad hoc networks, design-relevant insights are made possible. In particular, it is shown that SIC should be used with direct sequence spread spectrum. Also, any imperfections in the interference cancellation rapidly degrade its usefulness. More encouragingly, only a few - often just one - interfering nodes need to be canceled in order to get the vast majority of the available performance gain.",
"This paper surveys and unifies a number of recent contributions that have collectively developed a metric for decentralized wireless network analysis known as transmission capacity. Although it is notoriously difficult to derive general end-to-end capacity results for multi-terminal or adhoc networks, the transmission capacity (TC) framework allows for quantification of achievable single-hop rates by focusing on a simplified physical MAC-layer model. By using stochastic geometry to quantify the multi-user interference in the network, the relationship between the optimal spatial density and success probability of transmissions in the network can be determined, and expressed-often fairly simply-in terms of the key network parameters. The basic model and analytical tools are first discussed and applied to a simple network with path loss only and we present tight upper and lower bounds on transmission capacity (via lower and upper bounds on outage probability). We then introduce random channels (fading shadowing) and give TC and outage approximations for an arbitrary channel distribution, as well as exact results for the special cases of Rayleigh and Nakagami fading. We then apply these results to show how TC can be used to better understand scheduling, power control, and the deployment of multiple antennas in a decentralized network. The paper closes by discussing shortcomings in the model as well as future research directions.",
"Spectrum sharing between wireless networks improves the efficiency of spectrum usage, and thereby alleviates spectrum scarcity due to growing demands for wireless broadband access. To improve the usual underutilization of the cellular uplink spectrum, this paper addresses spectrum sharing between a cellular uplink and a mobile ad hoc networks. These networks access either all frequency subchannels or their disjoint subsets, called spectrum underlay and spectrum overlay, respectively. Given these spectrum sharing methods, the capacity trade-off between the coexisting networks is analyzed based on the transmission capacity of a network with Poisson distributed transmitters. This metric is defined as the maximum density of transmitters subject to an outage constraint for a given signal-to-interference ratio (SIR). Using tools from stochastic geometry, the transmission-capacity trade-off between the coexisting networks is analyzed, where both spectrum overlay and underlay as well as successive interference cancellation (SIC) are considered. In particular, for small target outage probability, the transmission capacities of the coexisting networks are proved to satisfy a linear equation, whose coefficients depend on the spectrum sharing method and whether SIC is applied. This linear equation shows that spectrum overlay is more efficient than spectrum underlay. Furthermore, this result also provides insight into the effects of network parameters on transmission capacities, including link diversity gains, transmission distances, and the base station density. In particular, SIC is shown to increase the transmission capacities of both coexisting networks by a linear factor, which depends on the interference-power threshold for qualifying canceled interferers.",
"We study the transmission capacities of two coexisting wireless networks (a primary network vs. a secondary network) that operate in the same geographic region and share the same spectrum. We define transmission capacity as the product among the density of transmissions, the transmission rate, and the successful transmission probability (1 minus the outage probability). The primary (PR) network has a higher priority to access the spectrum without particular considerations for the secondary (SR) network, where the SR network limits its interference to the PR network by carefully controlling the density of its transmitters. Assuming that the nodes are distributed according to Poisson point processes and the two networks use different transmission ranges, we quantify the transmission capacities for both of these two networks and discuss their tradeoff based on asymptotic analysis. Our results show that if the PR network permits a small increase of its outage probability, the sum transmission capacity of the two networks (i.e., the overall spectrum efficiency per unit area) will be boosted significantly over that of a single network.",
"We consider transmission of packets in two-hop wireless ad hoc networks in which relay nodes are deployed between the source-destination pairs. Based on results from extreme value theory and product tails, we derive throughput scaling laws when opportunistic relay selection is performed. Assuming partial channel state information at each transmitter (CSIT) and decode- and-forward, half-duplex relays, we investigate how the per-hop throughput depends on the channel gain asymptotic distribution and the relay deployment. In dense networks with lambda t nodes per m 2 and fixed relay distances, we provide specific scaling laws for Rayleigh, lognormal, and Weibull fading, showing that the throughput is upper bounded by thetas(radic(lambda t )). Interestingly, with variable relay distances and location-aware relay selection, we analytically show that regularly varying channel distributions result in enhanced multi-relay diversity gain, achieving linear throughput scaling thetas(radic(lambda t )).",
"This paper addresses three issues in the field of ad hoc network capacity: the impact of (i) channel fading, (ii) channel inversion power control, and (iii) threshold-based scheduling on capacity. Channel inversion and threshold scheduling may be viewed as simple ways to exploit channel state information (CSI) without requiring cooperation across transmitters. We use the transmission capacity (TC) as our metric, defined as the maximum spatial intensity of successful simultaneous transmissions subject to a constraint on the outage probability (OP). By assuming the nodes are located on the infinite plane according to a Poisson process, we are able to employ tools from stochastic geometry to obtain asymptotically tight bounds on the distribution of the signal-to-interference (SIR) level, yielding in turn tight bounds on the OP (relative to a given SIR threshold) and the TC. We demonstrate that in the absence of CSI, fading can significantly reduce the TC and somewhat surprisingly, channel inversion only makes matters worse. We develop a threshold-based transmission rule where transmitters are active only if the channel to their receiver is acceptably strong, obtain expressions for the optimal threshold, and show that this simple, fully distributed scheme can significantly reduce the effect of fading.",
"In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M sup 1-2 spl alpha , where M is the spreading factor and spl alpha >2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.",
"This paper derives the outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple antenna diversity techniques, for a general class of signal distributions. This analysis allows system performance to be quantified for fading or non-fading environments. The transmission capacity is given for interference-limited uniformly random networks on the entire plane with path loss exponent alpha > 2 in which nodes use: (1) static beamforming through M sectorized antennas, for which the increase in transmission capacity is shown to be thetas(M2) if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) dynamic eigenbeamforming (maximal ratio transmission combining), in which the increase is shown to be thetas(M 2 alpha ); (3) various transmit antenna selection and receive antenna selection combining schemes, which give appreciable but rapidly diminishing gains; and (4) orthogonal space-time block coding, for which there is only a small gain due to channel hardening, equivalent to Nakagami-m fading for increasing m. It is concluded that in ad hoc networks, static and dynamic beamforming perform best, selection combining performs well but with rapidly diminishing returns with added antennas, and that space-time block coding offers only marginal gains.",
"Preface. Preface to Volume II. Contents of Volume II. Part IV Medium Access Control 1 Spatial Aloha: the Bipole Model 2 Receiver Selection in Spatial 3 Carrier Sense Multiple 4 Code Division Multiple Access in Cellular Networks Bibliographical Notes on Part IV. Part V Multihop Routing in Mobile ad Hoc Networks: 5 Optimal Routing 6 Greedy Routing 7 Time-Space Routing Bibliographical Notes on Part V. Part VI Appendix:Wireless Protocols and Architectures: 8 RadioWave Propagation 9 Signal Detection 10 Wireless Network Architectures and Protocols Bibliographical Notes on Part VI Bibliography Table of Notation Index.",
"The transmission capacity of an ad-hoc network is the maximum density of active transmitters in an unit area, given an outage constraint at each receiver for a fixed rate of transmission. Assuming channel state information is available at the receiver, this paper presents bounds on the transmission capacity as a function of the number of antennas used for transmission, and the spatial receive degrees of freedom used for interference cancelation at the receiver. Canceling the strongest interferers, using a single antenna for transmission together with using all but one spatial receive degrees of freedom for interference cancelation is shown to maximize the transmission capacity. Canceling the closest interferers, using a single antenna for transmission together with using a fraction of the total spatial receive degrees of freedom for interference cancelation depending on the path loss exponent, is shown to maximize the transmission capacity.",
"Multicast transmission, wherein the same packet must be delivered to multiple receivers, is an important aspect of sensor and tactical networks and has several distinctive traits as opposed to more commonly studied unicast networks. Specially, these include 1) identical packets must be delivered successfully to several nodes, 2) outage at any receiver requires the packet to be retransmitted at least to that receiver, and 3) the multicast rate is dominated by the receiver with the weakest link in order to minimize outage and retransmission. A first contribution of this paper is the development of a tractable multicast model and throughput metric that captures each of these key traits in a multicast wireless network. We utilize a Poisson cluster process (PCP) consisting of a distinct Poisson point process (PPP) for the transmitters and receivers, and then define the multicast transmission capacity (MTC) as the maximum achievable multicast rate per transmission attempt times the maximum intensity of multicast clusters under decoding delay and multicast outage constraints. A multicast cluster is a contiguous area over which a packet is multicasted, and to reduce outage it can be tessellated into v smaller regions of multicast. The second contribution of the paper is the analysis of several key aspects of this model, for which we develop the following main result. Assuming τ v transmission attempts are allowed for each tessellated region in a multicast cluster, we show that the MTC is Θ(ρkxlog(k)vy) where ρ, x and y are functions of τ and v depending on the network size and intensity, and k is the average number of the intended receivers in a cluster. We derive ρ, x, y for a number of regimes of interest, and also show that an appropriate number of retransmissions can significantly enhance the MTC.",
"The performance benefits of two interference cancellation methods, successive interference cancellation (SIC) and joint detection (JD), in wireless ad hoc networks are compared within the transmission capacity framework. SIC involves successively decoding and subtracting out strong interfering signals until the desired signal can be decoded, while higher-complexity JD refers to simultaneously decoding the desired signal and the signals of a few strong interferers. Tools from stochastic geometry are used to develop bounds on the outage probability as a function of the spatial density of interferers. These bounds show that SIC performs nearly as well as JD when the signal-to-interference ratio (SIR) threshold is less than one, but that SIC is essentially useless for SIR thresholds larger than one whereas JD provides a significant outage benefit regardless of the SIR threshold."
]
}
|
1004.3714
|
2122426217
|
This paper develops upper bounds on the end-to-end transmission capacity of multihop wireless networks. Potential source-destination paths are dynamically selected from a pool of randomly located relays, from which a closed-form lower bound on the outage probability is derived in terms of the expected number of potential paths. This is in turn used to provide an upper bound on the number of successful transmissions that can occur per unit area, which is known as the transmission capacity. The upper bound results from assuming independence among the potential paths, and can be viewed as the maximum diversity case. A useful aspect of the upper bound is its simple form for an arbitrary-sized network, which allows insights into how the number of hops and other network parameters affect spatial throughput in the nonasymptotic regime. The outage probability analysis is then extended to account for retransmissions with a maximum number of allowed attempts. In contrast to prevailing wisdom, we show that predetermined routing (such as nearest neighbor) is suboptimal, since more hops are not useful once the network is interference-limited. Our results also make clear that randomness in the location of relay sets and dynamically varying channel states is helpful in obtaining higher aggregate throughput, and that dynamic route selection should be used to exploit path diversity.
|
If several other strong assumptions are made, e.g. that all relays are placed equidistantly on a straight line and all outages are independent, then a closed-form multi-hop transmission capacity can be derived @cite_22 . Stamatiou @cite_6 also investigated multihop routing in a Poisson spatial model, focusing on characterizing the end-to-end delay and stability, again for predetermined routes. Other recent works analyzing the throughput of multihop networks using stochastic geometric tools include @cite_0 , which extended @cite_22 to non-slotted ALOHA, and @cite_23 , which adopted a similar framework to study the throughput-delay-reliability tradeoff with an ARQ protocol without requiring all hops to be equidistant. However, all of these assume predetermined route selection, even though the outage of a predetermined route does not preclude successful communication over other routes. Separately, multihop capacity has also been studied in line networks without explicitly considering additional interference @cite_24 @cite_17 . This approach is helpful for comparing the impact of additional hops in bandwidth- and power-limited networks, but fails to account for the interference inherent in a large wireless network.
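The kind of bookkeeping these equidistant-hop, independent-outage models permit can be sketched as follows (an illustration of the modeling assumptions above, not the exact metric of @cite_22 ): with a per-hop, per-attempt success probability p, N hops, and at most M transmissions per hop, the end-to-end delivery probability factors as (1 − (1 − p)^M)^N.

```python
def end_to_end_success(p_hop, num_hops, max_tx):
    """End-to-end delivery probability when each of `num_hops` equidistant hops
    succeeds independently with probability `p_hop` per attempt and each hop is
    allowed at most `max_tx` (re)transmissions -- the independence and
    equal-spacing assumptions discussed above."""
    per_hop = 1.0 - (1.0 - p_hop) ** max_tx
    return per_hop ** num_hops

# More hops shorten each link (raising p_hop) but multiply the failure opportunities:
for hops, p in [(1, 0.6), (2, 0.8), (4, 0.9), (8, 0.95)]:
    print(hops, round(end_to_end_success(p, hops, max_tx=3), 4))
```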
|
{
"cite_N": [
"@cite_22",
"@cite_6",
"@cite_0",
"@cite_24",
"@cite_23",
"@cite_17"
],
"mid": [
"2114106914",
"1979262703",
"2071787409",
"2020650131",
"2160647801",
"2124107362"
],
"abstract": [
"We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions. The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed-form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination. We also derive the optimum number of hops and optimal per hop success probability and show that our result follows the well-known square root scaling law while providing exact expressions for the preconstants, which contain most of the design-relevant network parameters. Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability.",
"We consider a network where each route comprises a backlogged source, a number of relays and a destination at a finite distance. The locations of the sources and the relays are realizations of independent Poisson point processes. Given that the nodes observe a TDMA ALOHA MAC protocol, our objective is to determine the number of relays and their placement such that the mean end-to-end delay in a typical route of the network is minimized.We first study an idealistic network model where all routes have the same number of hops, the same distance per hop and their own dedicated relays. Combining tools from queueing theory and stochastic geometry, we provide a precise characterization of the mean end-to-end delay. We find that the delay is minimized if the first hop is much longer than the remaining hops and that the optimal number of hops scales sublinearly with the source-destination distance. Simulating the original network scenario reveals that the analytical results are accurate, provided that the density of the relay process is sufficiently large. We conclude that, given the considered MAC protocol, our analysis provides a delay-minimizing routing strategy for random, multihop networks involving a small number of hops.",
"This paper presents the evaluation of the multi-hop aggregate information efficiency of the slotted and unslotted ALOHA protocols. We consider a multi-hop wireless network where the nodes are spatially characterized by a Poisson point process and the traffic generation also follows a Poisson distribution. By applying the properties of stochastic geometry, we derive a closed-form lower bound on the outage probability as a function of the required communication rate, the single-hop distance, the number of hops and the maximum number of retransmissions. The results indicate that slotted ALOHA always outperforms its unslotted version, demonstrating the importance of synchronization in distributed networks. In addition, we show that it is always possible to optimize the network efficiency by properly setting the required rate for a given packet density. Finally, in the scenario considered, the use of retransmissions and multiple hops never achieves the best performance if compared to the option of single-hop links without retransmissions.",
"The goal of this paper is to establish which practical routing schemes for wireless networks are most suitable for power-limited and bandwidth-limited communication regimes. We regard channel state information (CSI) at the receiver and point-to-point capacity-achieving codes for the additive white Gaussian noise (AWGN) channel as practical features, interference cancellation (IC) as possible, but less practical, and synchronous cooperation (CSI at the transmitters) as impractical. We consider a communication network with a single source node, a single destination node, and N-1 intermediate nodes placed equidistantly on a line between them. We analyze the minimum total transmit power needed to achieve a desired end-to-end rate for several schemes and demonstrate that multihop communication with spatial reuse performs very well in the power-limited regime, even without IC. However, within a class of schemes not performing IC, single-hop transmission (directly from source to destination) is more suitable for the bandwidth-limited regime, especially when higher spectral efficiencies are required. At such higher spectral efficiencies, the gap between single-hop and multihop can be closed by employing IC, and we present a scheme based upon backward decoding that can remove all interference from the multihop system with an arbitrarily small rate loss. This new scheme is also used to demonstrate that rates of O(log N) are achievable over linear wireless networks even without synchronous cooperation.",
"Delay-reliability (D-R), and throughput-delay-reliability (T-D-R) tradeoffs in an ad hoc network are derived for single hop and multi-hop transmission with automatic repeat request (ARQ) on each hop. The delay constraint is modeled by assuming that each packet is allowed at most D retransmissions end-to-end, and the reliability is defined as the probability that the packet is successfully decoded in at most D retransmissions. The throughput of the ad hoc network is characterized by the transmission capacity, where the transmission capacity is defined to be the maximum density of spatial transmissions that can be simultaneously supported in an ad hoc network under quality of service (QoS) constraints (maximum retransmissions and reliability). The transmission capacity captures the T-D-R tradeoff as it incorporates the dependence between the throughput, the maximum delay, and the reliability. Given an end-to-end retransmission constraint of D, the optimal allocation of the number of retransmissions allowed at each hop is derived that maximizes a lower bound on the transmission capacity. Optimizing over the number of hops, single hop transmission is shown to be optimal for maximizing a lower bound on the transmission capacity in the sparse network regime.",
"We consider a frequency-flat fading multihop network with a single active source-destination pair terminals communicating over multiple hops through a set of intermediate relay terminals. We use Shannon-theoretic tools to analyze the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) for a simple communication protocol based on time-division decode-and-forward relaying in meaningful asymptotic regimes of signal-to-noise ratio (SNR) under a system-wide power constraint on source and relay transmissions. The impact of multi-hopping and channel fading on the key performance measures of the high and low SNR regimes is investigated to shed new light on the possible enhancements in power bandwidth efficiency and link reliability. In contrast to the common belief that in fading environments communicating over multiple hops suffers significantly in performance due to the worst link limitation, our results indicate that hopping could significantly improve the outage behavior over slow-fading networks and stabilize links against the random channel fluctuations. In particular, we prove that there exists an optimal number of hops that minimizes the end-to-end outage probability and characterize the dependence of this optimal number on the fading statistics and target energy and spectral efficiencies."
]
}
|
1004.3714
|
2122426217
|
This paper develops upper bounds on the end-to-end transmission capacity of multihop wireless networks. Potential source-destination paths are dynamically selected from a pool of randomly located relays, from which a closed-form lower bound on the outage probability is derived in terms of the expected number of potential paths. This is in turn used to provide an upper bound on the number of successful transmissions that can occur per unit area, which is known as the transmission capacity. The upper bound results from assuming independence among the potential paths, and can be viewed as the maximum diversity case. A useful aspect of the upper bound is its simple form for an arbitrary-sized network, which allows insights into how the number of hops and other network parameters affect spatial throughput in the nonasymptotic regime. The outage probability analysis is then extended to account for retransmissions with a maximum number of allowed attempts. In contrast to prevailing wisdom, we show that predetermined routing (such as nearest neighbor) is suboptimal, since more hops are not useful once the network is interference-limited. Our results also make clear that randomness in the location of relay sets and dynamically varying channel states is helpful in obtaining higher aggregate throughput, and that dynamic route selection should be used to exploit path diversity.
|
In addition, the above-mentioned diversity gain from dynamic relay selection has been utilized for opportunistic routing @cite_15 @cite_2 , so that any node that overhears a packet can participate in forwarding it. Reference @cite_2 appears to be the first investigation of the capacity improvement of opportunistic routing over predetermined routing in a Poisson field; however, the performance gain shown in @cite_2 is based on simulation, without an exact mathematical derivation. Different random hop selection strategies have also been studied and compared @cite_3 @cite_19 , but without tractable throughput bounds. Hence, characterizing the available diversity gain analytically is worth investigating. In this paper, we explicitly show that since a pool of randomly located relays with varying channels provides more potential routes, more randomness is preferable.
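A small Monte Carlo sketch makes the claimed diversity gain explicit: with i.i.d. links of success probability p (an assumption made only for illustration), a predetermined relay succeeds with probability p, whereas opportunistically selecting among K candidate relays succeeds with probability 1 − (1 − p)^K.

```python
import random

def hop_success(p_link, num_relays, dynamic):
    """One two-hop attempt: with predetermined routing only a single fixed relay may
    forward the packet; with dynamic (opportunistic) selection the hop succeeds if
    any of `num_relays` candidate relays decodes. Links are i.i.d. Bernoulli(p_link),
    an assumption made purely for illustration."""
    decoded = [random.random() < p_link for _ in range(num_relays)]
    return any(decoded) if dynamic else decoded[0]

def estimate(p_link, num_relays, dynamic, trials=100_000):
    return sum(hop_success(p_link, num_relays, dynamic) for _ in range(trials)) / trials

print("predetermined relay :", estimate(0.5, 4, dynamic=False))   # ~ 0.5
print("opportunistic choice:", estimate(0.5, 4, dynamic=True))    # ~ 1 - (1 - 0.5)**4
```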
|
{
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_3",
"@cite_2"
],
"mid": [
"2149622712",
"2104911317",
"2043288730",
"2155564629"
],
"abstract": [
"This paper addresses the routing problem for large wireless networks of randomly distributed nodes with Rayleigh fading channels. First, we establish that the distances between neighboring nodes in a Poisson point process follow a generalized Rayleigh distribution. Based on this result, it is then shown that, given an end-to-end packet delivery probability (as a quality of service requirement), the energy benefits of routing over many short hops are significantly smaller than for deterministic network models that are based on the geometric disk abstraction. If the permissible delay for short-hop routing and long-hop routing is the same, it turns out that routing over fewer but longer hops may even outperform nearest-neighbor routing, in particular for high end-to-end delivery probabilities.",
"This paper describes Extremely Opportunistic Routing (ExOR), a new unicast routing technique for multi-hop wireless networks. ExOR forwards each packet through a sequence of nodes, deferring the choice of each node in the sequence until after the previous node has transmitted the packet on its radio. ExOR then determines which node, of all the nodes that successfully received that transmission, is the node closest to the destination. That closest node transmits the packet. The result is that each hop moves the packet farther (or average) than the hops of the best possible pre-determined route.The ExOR design addresses the challenge of choosing a forwarding node after transmission using a distributed algorithm. First, when a node transmits a packet, it includes in the packet a simple schedule describing the priority order in which the potential receivers should forward the packet. The node computes the schedule based on shared measurements of inter-node delivery rates. ExOR then uses a distributed slotted MAC protocol for acknowledgements to ensure that the receivers agree who the highest priority receiver was.The efficacy of ExOR depends mainly on the rate at which the reception probability falls off with distance. Simulations based on measured radio characteristics [6] suggest that ExOR reduces the total number of transmissions by nearly a factor of two over the best possible pre-determined route.",
"The multihop spatial reuse Aloha (MSR-Aloha) protocol was recently introduced by Baccelli et aL, where each transmitter selects the receiver among its feasible next hops that maximizes the forward progress of the head of line packet towards its final destination. They identify the optimal medium access probability (MAP) that maximizes the spatial density of progress, defined as the product of the spatial intensity of attempted transmissions times the average per-hop progress of each packet towards its destination. We propose a variant called longest edge routing where each transmitter selects its longest feasible edge, and then identifies a packet in its backlog whose next hop is the associated receiver. The main contribution of this work (and of Baccelli et aL) is the use of stochastic geometry to identify the optimal MAP and the corresponding optimal spatial density of progress.",
"In classical routing strategies for multihop mobile wireless networks packets are routed on a pre-defined route usually obtained by a shortest path routing protocol. In opportunistic routing schemes, for each packet and each hop, the next relay is found by dynamically selecting the node that captures the packet transmission and which is the nearest to the destination. Such a scheme allows each packet to take advantage of the local pattern of transmissions and fadings at any slot and at any hop. The aim of this paper is to quantify and optimize the potential performance gains of such opportunistic routing strategies compared with classical routing schemes. The analysis is conducted under the following lower layer assumptions: the Medium Access (MAC) layer is a spatial version of Aloha which has been shown to scale well for large multihop networks; the capture of a packet by some receiver is determined by the Signal over Interference and Noise Ratio (SINR) experienced by the receiver. The paper contains a detailed simulation study which shows that such time-space opportunistic schemes very significantly outperform classical routing schemes. It also contains a mathematical study where we show how to optimally tune the MAC parameters so as to minimize the average number of time slots required to carry a typical packet from origin to destination on long paths. We show that this optimization is independent of network density."
]
}
|
1004.2880
|
1904002074
|
The coalition structure formation problem represents an active research area in multi-agent systems. A coalition structure is defined as a partition of the agents involved in a system into disjoint coalitions. The problem of finding the optimal coalition structure is NP-complete. In order to find the optimal solution in a combinatorial optimization problem it is theoretically possible to enumerate the solutions and evaluate each. But this approach is infeasible since the number of solutions often grows exponentially with the size of the problem. In this paper we present a greedy adaptive search procedure (GRASP) to efficiently search the space of coalition structures in order to find an optimal one. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
|
A deterministic algorithm must systematically explore the search space of candidate solutions. One of the first algorithms returning an optimal solution is the dynamic programming (DP) algorithm proposed in @cite_4 for the set partitioning problem. This algorithm is polynomial in the size of the input ( @math ) and runs in @math time, which is significantly less than an exhaustive enumeration ( @math ). However, DP is not an anytime algorithm and has a large memory requirement. Indeed, for each coalition @math it computes @math and @math : it enumerates all the possible splits of the coalition @math and assigns to @math the best split and to @math its value. In @cite_2 the authors proposed an improved version of the DP algorithm (IDP) that performs fewer operations and requires less memory than DP. As shown by the authors, IDP is one of the fastest exact algorithms in the literature for computing an optimal solution.
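The recursion just described can be sketched directly over set-indexed tables; the toy implementation below (a sketch, not the exact pseudocode of @cite_4 or @cite_2 ) stores, for every coalition, the value of its best split and the split itself, and then unfolds the stored splits into an optimal coalition structure.

```python
from itertools import combinations

def dp_coalition_structure(agents, v):
    """For every coalition C, compute f2[C] (value of the best partition of C) and
    f1[C] (the split achieving it); `v` maps frozenset coalitions to their values.
    Enumerating all splits of all coalitions gives the O(3^n) running time noted above."""
    f1, f2 = {}, {}
    for size in range(1, len(agents) + 1):
        for C in map(frozenset, combinations(agents, size)):
            best_val, best_split = v[C], (C,)
            members = sorted(C)
            # enumerate splits {C1, C2}; forcing members[0] into C1 avoids duplicates
            for k in range(len(members)):
                for rest in combinations(members[1:], k):
                    C1 = frozenset((members[0],) + rest)
                    C2 = C - C1
                    if C2 and f2[C1] + f2[C2] > best_val:
                        best_val, best_split = f2[C1] + f2[C2], (C1, C2)
            f1[C], f2[C] = best_split, best_val
    return f1, f2

def best_structure(C, f1):
    """Unfold the stored best splits into a coalition structure (list of coalitions)."""
    split = f1[C]
    if len(split) == 1:
        return [C]
    return best_structure(split[0], f1) + best_structure(split[1], f1)

# Toy example with an assumed characteristic function v(C) = |C|**2
agents = (1, 2, 3)
v = {frozenset(c): float(len(c)) ** 2
     for size in range(1, len(agents) + 1) for c in combinations(agents, size)}
f1, f2 = dp_coalition_structure(agents, v)
print(best_structure(frozenset(agents), f1), f2[frozenset(agents)])
```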
|
{
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2015062753",
"1501141062"
],
"abstract": [
"The complete set partitioning (CSP) problem is a special case of the set partitioning problem where the coefficient matrix has 2 m −1 columns, each column being a binary representation of a unique integer between 1 and 2 m −1,m⩾1. It has wide applications in the area of corporate tax structuring in operations research. In this paper we propose a dynamic programming approach to solve the CSP problem, which has time complexityO(3 m ), wheren=2 m −1 is the size of the problem space.",
"Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of partitioning the set of agents into exhaustive and disjoint coalitions such that the social welfare is maximized. This coalition structure generation problem is extremely challenging due to the exponential number of partitions that need to be examined. Specifically, given n agents, there are O(nn) possible partitions. To date, the only algorithm that can find an optimal solution in O(3n) is the Dynamic Programming (DP) algorithm, due to However, one of the main limitations of DP is that it requires a significant amount of memory. In this paper, we devise an Improved Dynamic Programming algorithm (IDP) that is proved to perform fewer operations than DP (e.g. 38.7 of the operations given 25 agents), and is shown to use only 33.3 of the memory in the best case, and 66.6 in the worst."
]
}
|
1004.2880
|
1904002074
|
The coalition structure formation problem represents an active research area in multi-agent systems. A coalition structure is defined as a partition of the agents involved in a system into disjoint coalitions. The problem of finding the optimal coalition structure is NP-complete. In order to find the optimal solution in a combinatorial optimization problem it is theoretically possible to enumerate the solutions and evaluate each. But this approach is infeasible since the number of solutions often grows exponentially with the size of the problem. In this paper we present a greedy adaptive search procedure (GRASP) to efficiently search the space of coalition structures in order to find an optimal one. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
|
Neither DP nor IDP is an anytime algorithm: they cannot be interrupted before their normal termination. In @cite_1 , the authors presented the first anytime algorithm, sketched in Algorithm , which can be interrupted to obtain a solution within a time limit, although that solution is not guaranteed to be optimal; when not interrupted, it returns the optimal solution. The coalition structure generation process can be viewed as a search in a coalition structure graph as reported in Figure . One desideratum is to be able to guarantee that the returned coalition structure is within a worst-case bound from optimal, i.e. that, after searching through a subset @math of coalition structures, the bound is finite and as small as possible, where @math is the best CS and @math is the best CS that has been seen in the subset @math . In @cite_1 it has been proved that: to bound @math , it suffices to search the lowest two levels of the coalition structure graph (with this search, the bound is @math , and the number of nodes searched is @math ); this bound is tight; and no other search algorithm can establish any bound @math while searching only @math nodes or fewer.
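The minimal search that establishes a bound, namely the lowest two levels of the coalition structure graph (the grand coalition plus every partition into exactly two coalitions), can be enumerated directly, as in the sketch below; it illustrates only that first phase, not the full anytime algorithm of @cite_1 , and visits 2^(n-1) coalition structures in total.

```python
from itertools import combinations

def search_bottom_two_levels(agents, value):
    """Evaluate the grand coalition and every two-coalition structure, returning the
    best one seen; `value` maps frozenset coalitions to their values. This is the
    minimal search discussed above that establishes a worst-case bound."""
    agents = tuple(agents)
    grand = frozenset(agents)
    best_cs, best_val = [grand], value[grand]
    # each 2-coalition CS is counted once by forcing agents[0] into the first part
    for size in range(1, len(agents)):
        for rest in combinations(agents[1:], size - 1):
            C1 = frozenset((agents[0],) + rest)
            C2 = grand - C1
            val = value[C1] + value[C2]
            if val > best_val:
                best_cs, best_val = [C1, C2], val
    return best_cs, best_val
```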
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2156887976"
],
"abstract": [
"Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired search across self-interested manipulative agents. © 1999 Elsevier Science B.V. All rights reserved."
]
}
|
1004.2880
|
1904002074
|
The coalition structure formation problem represents an active research area in multi-agent systems. A coalition structure is defined as a partition of the agents involved in a system into disjoint coalitions. The problem of finding the optimal coalition structure is NP-complete. In order to find the optimal solution in a combinatorial optimization problem it is theoretically possible to enumerate the solutions and evaluate each. But this approach is infeasible since the number of solutions often grows exponentially with the size of the problem. In this paper we present a greedy adaptive search procedure (GRASP) to efficiently search the space of coalition structures in order to find an optimal one. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
|
As regards approximate algorithms, a solution based on a genetic algorithm has been proposed in @cite_3 ; it performs well when there is some regularity in the search space. Indeed, in order to apply their algorithm, the authors assume that the value of a coalition depends on the other coalitions in the coalition structure, which makes the algorithm not well suited to the general case. A more recent solution @cite_7 is based on a simulated annealing algorithm, a widely used stochastic local search method. At each iteration the algorithm selects a random neighbour solution @math of a CS @math . The search proceeds with the adjacent CS @math of the original CS @math if @math yields a better social welfare than @math ; otherwise, the search continues with @math with probability @math , where @math is the temperature parameter that decreases according to the annealing schedule @math .
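The acceptance rule and annealing schedule just described translate into a few lines of code; the sketch below uses an illustrative neighbourhood (move one agent to another, possibly new, coalition) and a geometric cooling schedule, both of which are assumptions rather than the exact choices of @cite_7 .

```python
import math, random

def simulated_annealing(agents, value, T0=1.0, cooling=0.95, iters=10_000):
    """Stochastic local search over coalition structures (lists of disjoint frozensets).
    `agents` is a sequence; `value` maps frozenset coalitions to their values.
    A random neighbour is accepted if it improves the social welfare, and otherwise
    with probability exp(dV / T); the temperature decays as T <- cooling * T."""
    def welfare(cs):
        return sum(value[c] for c in cs)

    cs = [frozenset([a]) for a in agents]          # start from the all-singletons CS
    T = T0
    for _ in range(iters):
        a = random.choice(agents)
        src = next(c for c in cs if a in c)
        dst = random.choice([c for c in cs if c is not src] + [frozenset()])
        neighbour = [c for c in cs if c is not src and c is not dst]
        if len(src) > 1:
            neighbour.append(src - {a})
        neighbour.append(dst | {a})
        dV = welfare(neighbour) - welfare(cs)
        if dV > 0 or random.random() < math.exp(dV / T):
            cs = neighbour
        T *= cooling
    return cs, welfare(cs)
```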
|
{
"cite_N": [
"@cite_7",
"@cite_3"
],
"mid": [
"1583835954",
"2139662247"
],
"abstract": [
"We present a simulated annealing algorithm for coalition formation represented as characteristic function games. We provide a detailed analysis of various neighbourhoods for the algorithm, and of their effects to the algorithm's performance. Our practical experiments and comparisons with other methods demonstrate that simulated annealing provides a useful tool to tackle the combinatorics involved in multi-agent coalition formation.",
"Coalition formation has been a very active area of research in multiagent systems. Most of this research has concentrated on decentralized procedures that allow self-interested agents to negotiate the formation of coalitions and division of coalition payoffs. A different line of research has addressed the problem of finding the optimal division of agents into coalitions such that the sum total of the the payoffs to all the coalitions is maximized (Larson and Sandholm, 1999). This is the optimal coalition structure identification problem. Deterministic search algorithms have been proposed and evaluated under the assumption that the performance of a coalition is independent of other coalitions. We use an order-based genetic algorithm (OBGA) as a stochastic search process to identify the optimal coalition structure. We compare the performance of the OBGA with a representative deterministic algorithm presented in the literature. Though the OBGA has no performance guarantees, it is found to dominate the deterministic algorithm in a significant number of problem settings. An additional advantage of the OBGA is its scalability to larger problem sizes and to problems where performance of a coalition depends on other coalitions in the environment."
]
}
|
1004.2242
|
1497657497
|
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N 2.5 for the Lennard-Jones clusters of N-particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.
|
In PEAs the whole population is formed in a distributed way and consists of multiple subpopulations. Single-population master-slave, multiple-population, fine-grained, and hierarchical combinations are the main types of PEAs @cite_28 . The algorithm proposed in this paper differs from PEAs in that all members of the population interact: there is a mutual effect among the members of the population in addition to the leaders' effect on the individuals in their groups, and the sum of all these interactions forms the evolutionary technique. In PEAs, by contrast, the interaction between subpopulations is in most cases realized through the migration of individuals, and the evolutionary techniques used within the subpopulations can be independent of each other.
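For contrast, the migration step that couples otherwise independent subpopulations in a typical island-model PEA can be sketched as follows; this is a generic illustration with an assumed ring topology and an assumed (fitness, genome) representation, not the scheme of any specific PEA surveyed in @cite_28 .

```python
import random

def migrate(islands, k=1):
    """Island-model migration: each island sends copies of its k fittest individuals
    to the next island in a ring, replacing that island's k least fit individuals.
    Individuals are assumed to be (fitness, genome) tuples; between migrations each
    island may evolve with its own, independent evolutionary technique."""
    emigrants = [sorted(isl, reverse=True)[:k] for isl in islands]   # k fittest per island
    for i, isl in enumerate(islands):
        incoming = emigrants[(i - 1) % len(islands)]
        isl.sort()               # least fit first
        isl[:k] = incoming       # replace the k least fit with the immigrants
    return islands

# Toy usage: three islands of (fitness, genome) pairs
islands = [[(random.random(), f"g{i}{j}") for j in range(5)] for i in range(3)]
migrate(islands, k=1)
```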
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"1544076857"
],
"abstract": [
"Parallel Evolutionary Optimization.- A Model for Parallel Operators in Genetic Algorithms.- Parallel Evolutionary Multiobjective Optimization.- Parallel Hardware for Genetic Algorithms.- A Reconfigurable Parallel Hardware for Genetic Algorithms.- Reconfigurable Computing and Parallelism for Implementing and Accelerating Evolutionary Algorithms.- Distributed Evolutionary Computation.- Performance of Distributed GAs on DNA Fragment Assembly.- On Parallel Evolutionary Algorithms on the Computational Grid.- Parallel Evolutionary Algorithms on Consumer-Level Graphics Processing Unit.- Parallel Particle Swarm Optimization.- Intelligent Parallel Particle Swarm Optimization Algorithms.- Parallel Ant Colony Optimization for 3D Protein Structure Prediction using the HP Lattice Model."
]
}
|
1004.2285
|
2086498796
|
We propose an optimization approach to design cost-effective electrical power transmission networks. That is, we aim to select both the network structure and the line conductances (line sizes) so as to optimize the trade-off between network efficiency (low power dissipation within the transmission network) and the cost to build the network. We begin with a convex optimization method based on the paper “Minimizing Effective Resistance of a Graph” [Ghosh, Boyd & Saberi]. We show that this (DC) resistive network method can be adapted to the context of AC power flow. However, that does not address the combinatorial aspect of selecting network structure. We approach this problem as selecting a subgraph within an over-complete network, posed as minimizing the (convex) network power dissipation plus a non-convex cost on line conductances that encourages sparse networks where many line conductances are set to zero. We develop a heuristic approach to solve this non-convex optimization problem using: (1) a continuation method to interpolate from the smooth, convex problem to the (non-smooth, non-convex) combinatorial problem, (2) the majorization-minimization algorithm to perform the necessary intermediate smooth but non-convex optimization steps. Ultimately, this involves solving a sequence of convex optimization problems in which we iteratively reweight a linear cost on line conductances to fit the actual non-convex cost. Several examples are presented which suggest that the overall method is a good heuristic for network design. We also consider how to obtain sparse networks that are still robust against failures of lines and or generators.
|
The initial inspiration for our approach was the convex network optimization methods of Ghosh, Boyd, and Saberi @cite_9 . Building on earlier work @cite_8 , they consider the problem of minimizing the total resistance of an electrical network subject to a linear budget on line conductances, where the total resistance metric is interpreted as the expected power dissipation within the network under a random current model. We extend their work by also selecting the network structure. We impose sparsity on that structure in a manner similar to a number of methods that modify a convex optimization problem by adding a non-convex regularization term to obtain sparser solutions, as in compressed sensing @cite_10 @cite_2 @cite_0 or edge-preserving image restoration @cite_12 . The method of Candès et al. @cite_10 is especially relevant to our approach: they recommend the majorization-minimization algorithm @cite_11 as a heuristic for sparsity-favoring non-convex optimization.
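As a concrete illustration of the reweighted-l1 heuristic mentioned above, here is a minimal sketch in the spirit of @cite_10 applied to a toy sparse-recovery problem. It is not the authors' network-design code; the problem data are synthetic assumptions, and numpy and cvxpy are required.

```python
# Reweighted-l1 sketch: each majorization-minimization step solves a weighted-l1 convex problem.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

w = np.ones(n)                          # initial weights: plain l1
eps = 1e-3
for _ in range(5):                      # a few MM iterations
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x)))), [A @ x == b])
    prob.solve()
    w = 1.0 / (np.abs(x.value) + eps)   # reweight: approximates a log-based non-convex penalty

print("support size recovered:", int(np.sum(np.abs(x.value) > 1e-4)), "true:", k)
```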
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2100032953",
"1987717935",
"2168745297",
"2004544971",
"2107861471",
"2163705594",
"2149414429"
],
"abstract": [
"We consider the problem of determining optimal wire widths for a power or ground network, subject to limits on wire widths, voltage drops, total wire area, current density, and power dissipation. To account for the variation of the current demand, we model it as a random vector with known statistics, possibly including correlation between subsystem currents. Other researchers have shown that when the variation in the current is not taken into account, the optimal network topology is a tree. A tree topology is, however, almost never used in practice, because it is not robust with respect to variations in the lock currents. We show that when the current variation is taken into account, the optimal network is usually not a tree. We formulate a heuristic method based on minimizing a linear combination of total average power and total wire area. We show that this results in designs that obey the reliability constraints, occupy small area, and most importantly are robust against variations in block currents. The problem can be formulated as a nonlinear convex optimization problem that can be globally solved very effciently.",
"The effective resistance between two nodes of a weighted graph is the electrical resistance seen between the nodes of a resistor network with branch conductances given by the edge weights. The effective resistance comes up in many applications and fields in addition to electrical network analysis, including, for example, Markov chains and continuous-time averaging networks. In this paper we study the problem of allocating edge weights on a given graph in order to minimize the total effective resistance, i.e., the sum of the resistances between all pairs of nodes. We show that this is a convex optimization problem and can be solved efficiently either numerically or, in some cases, analytically. We show that optimal allocation of the edge weights can reduce the total effective resistance of the graph (compared to uniform weights) by a factor that grows unboundedly with the size of the graph. We show that among all graphs with @math nodes, the path has the largest value of optimal total effective resistance and the complete graph has the least.",
"The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using lscrp minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a particular regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS, [2]). Improvements are also observed for the reweighted-lscr1 approach of [3].",
"Several authors have shown recently that It is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum lscr1 norm among those having the given measurements. We show that by replacing the lscr1 norm with the lscrp norm with p < 1, exact reconstruction is possible with substantially fewer measurements. We give a theorem in this direction, and many numerical examples, both in one complex dimension, and larger-scale examples in two real dimensions.",
"It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained l1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the l1 norm of the coefficient sequence as is common, but by reweighting the l1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.",
"This paper deals with convex half-quadratic criteria and associated minimization algorithms for the purpose of image restoration. It brings a number of original elements within a unified mathematical presentation based on convex duality. Firstly, the Geman and Yang (1995) and Geman and Reynolds (1992) constructions are revisited, with a view to establishing the convexity properties of the resulting half-quadratic augmented criteria, when the original nonquadratic criterion is already convex. Secondly, a family of convex Gibbsian energies that incorporate interacting auxiliary variables is revealed as a potentially fruitful extension of the Geman and Reynolds construction.",
"Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition t..."
]
}
|
1004.3051
|
2952569022
|
In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is choosing weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray-'09]. The best-known approximation is O( n n) [Gamzu,Segev-'10], which improves on the previous-best O( n) approximation [Balcan,Blum-'06]. In this paper we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized. We demonstrate the versatility of our technique by presenting PTASs for two variants of the highway problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].
|
There are better approximation results, all based on dynamic programming, for a number of special cases. In @cite_10 a constant approximation is given for the case in which all paths have roughly the same length. An FPTAS is described by Hartline and Koltun @cite_18 for the case in which the highway has constant length (i.e., @math ). This was generalized to the case of constant-length paths in @cite_12 . In @cite_12 the authors also present an FPTAS for the case in which budgets are upper bounded by a constant. An FPTAS is also known @cite_10 @cite_8 for the case in which the paths induce a laminar family (in a laminar family of paths, two paths that intersect are contained one in the other).
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_12",
"@cite_8"
],
"mid": [
"1489711815",
"1526218779",
"",
"2062453462"
],
"abstract": [
"We present efficient approximation algorithms for a number of problems that call for computing the prices that maximize the revenue of the seller on a set of items. Algorithms for such problems enable the design of auctions and related pricing mechanisms [3]. In light of the fact that the problems we address are APX-hard in general [5], we design near-linear and near-cubic time approximation schemes under the assumption that the number of distinct items for sale is constant.",
"Novel compositions for use as cell growth-promoting materials are made by the following novel process involving the steps of: (a) slowly contacting serum or plasma with sufficient chilled perchloric acid to reach a 0.1 to 0.25 final molar concentration of said perchloric acid in said serum or plasma, (b) at a temperature of -1 DEG C. to 15 DEG C., (c) under intensive mixing which is continued until a homogeneous suspension is obtained, (d) separating the resultant precipitate, which contains the growth-promoting substances, from the supernatant, (e) eluting said growth-promoting substances from said precipitate by first resuspending said precipitate in an aqueous alkaline or salt solution, and thereafter, (f) adjusting the pH to solubilize the growth-promoting substances from the insoluble proteins, (g) separating the supernatant, which contains the growth-promoting substances, from the insoluble, undesired precipitate, (h) exchanging the solvent in the supernatant for a physiological solution, and (i) sterilizing the resultant growth-promoting material.",
"",
"We deal with the problem of finding profit-maximizing prices for a finite number of distinct goods, assuming that of each good an unlimited number of copies is available, or that goods can be reproduced at no cost (e.g., digital goods). Consumers specify subsets of the goods and the maximum prices they are willing to pay. In the considered single-minded case every consumer is interested in precisely one such subset. If the goods are the edges of a graph and consumers are requesting to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links.We start by showing weak NP-hardness of the very restricted case in which the requested subsets are nested, i.e., contained inside each other or non-intersecting, thereby resolving the previously open question whether the problem remains NP-hard when the underlying graph is simply a line. Using a reduction inspired by this result we present an approximation preserving reduction that proves APX-hardness even for very sparse instances defined on general graphs, where the number of requests per edge is bounded by a constant B and no path is longer than some constant l. On the algorithmic side we first present an O(log l + log B)-approximation algorithm that (almost) matches the previously best known approximation guarantee in the general case, but is especially well suited for sparse problem instances. Using a new upper bounding technique we then give an O(l2)-approximation, which is the first algorithm for the general problem with an approximation ratio that does not depend on B."
]
}
|
1004.3051
|
2952569022
|
In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is choosing weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray-'09]. The best-known approximation is O( n n) [Gamzu,Segev-'10], which improves on the previous-best O( n) approximation [Balcan,Blum-'06]. In this paper we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized. We demonstrate the versatility of our technique by presenting PTASs for two variants of the highway problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].
|
The highway and tollbooth problems belong to the family of pricing problems with single-minded customers and unlimited supply. Here we are given a set of customers: each customer wants to buy a subset of items (a bundle), provided its total price does not exceed her budget. In highway terminology, each driver is associated with an arbitrary subset of edges (rather than a path). For this problem a @math approximation is given in @cite_12 . This bound was refined in @cite_8 to @math , where @math denotes the maximum number of items in a bundle and @math the maximum number of bundles containing a given item. A @math approximation is given in @cite_10 . On the negative side, @cite_16 show that this problem is hard to approximate within @math , for some @math , assuming that @math for some @math .
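As a toy illustration of the objective (with made-up prices and budgets, not data from the cited works), the revenue of a price assignment under single-minded customers and unlimited supply can be computed as follows: a customer pays the sum of the prices of her bundle if and only if it fits her budget.

```python
# Toy revenue computation for unlimited-supply, single-minded pricing (hypothetical data).
def revenue(prices, customers):
    total = 0.0
    for bundle, budget in customers:
        cost = sum(prices[i] for i in bundle)
        if cost <= budget:              # customer buys only if the bundle fits her budget
            total += cost
    return total

prices = {"e1": 3.0, "e2": 2.0, "e3": 4.0}
customers = [({"e1", "e2"}, 6.0),       # pays 5
             ({"e2", "e3"}, 5.0),       # priced out: 6 > 5
             ({"e3"}, 4.0)]             # pays 4
print(revenue(prices, customers))       # 9.0
```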
|
{
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_8"
],
"mid": [
"2010996873",
"1526218779",
"",
"2062453462"
],
"abstract": [
"We prove semi-logarithmic inapproximability for a maximization problem called unique coverage: given a collection of sets, find a subcollection that maximizes the number of elements covered exactly once. Specifically, we prove O(1 logσ(e)n) inapproximability assuming that NP n BPTIME(2ne) for some e > 0. We also prove O(1 log1 3-e n) inapproximability, for any e > 0, assuming that refuting random instances of 3SAT is hard on average; and prove O(1 log n) inapproximability under a plausible hypothesis concerning the hardness of another problem, balanced bipartite independent set. We establish matching upper bounds up to exponents, even for a more general (budgeted) setting, giving an Ω(1 log n)-approximation algorithm as well as an Ω(1 log B)-approximation algorithm when every set has at most B elements. We also show that our inapproximability results extend to envy-free pricing, an important problem in computational economics. We describe how the (budgeted) unique coverage problem, motivated by real-world applications, has close connections to other theoretical problems including max cut, maximum coverage, and radio broad-casting.",
"Novel compositions for use as cell growth-promoting materials are made by the following novel process involving the steps of: (a) slowly contacting serum or plasma with sufficient chilled perchloric acid to reach a 0.1 to 0.25 final molar concentration of said perchloric acid in said serum or plasma, (b) at a temperature of -1 DEG C. to 15 DEG C., (c) under intensive mixing which is continued until a homogeneous suspension is obtained, (d) separating the resultant precipitate, which contains the growth-promoting substances, from the supernatant, (e) eluting said growth-promoting substances from said precipitate by first resuspending said precipitate in an aqueous alkaline or salt solution, and thereafter, (f) adjusting the pH to solubilize the growth-promoting substances from the insoluble proteins, (g) separating the supernatant, which contains the growth-promoting substances, from the insoluble, undesired precipitate, (h) exchanging the solvent in the supernatant for a physiological solution, and (i) sterilizing the resultant growth-promoting material.",
"",
"We deal with the problem of finding profit-maximizing prices for a finite number of distinct goods, assuming that of each good an unlimited number of copies is available, or that goods can be reproduced at no cost (e.g., digital goods). Consumers specify subsets of the goods and the maximum prices they are willing to pay. In the considered single-minded case every consumer is interested in precisely one such subset. If the goods are the edges of a graph and consumers are requesting to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links.We start by showing weak NP-hardness of the very restricted case in which the requested subsets are nested, i.e., contained inside each other or non-intersecting, thereby resolving the previously open question whether the problem remains NP-hard when the underlying graph is simply a line. Using a reduction inspired by this result we present an approximation preserving reduction that proves APX-hardness even for very sparse instances defined on general graphs, where the number of requests per edge is bounded by a constant B and no path is longer than some constant l. On the algorithmic side we first present an O(log l + log B)-approximation algorithm that (almost) matches the previously best known approximation guarantee in the general case, but is especially well suited for sparse problem instances. Using a new upper bounding technique we then give an O(l2)-approximation, which is the first algorithm for the general problem with an approximation ratio that does not depend on B."
]
}
|
1004.3051
|
2952569022
|
In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is choosing weights so as to maximize the profit. A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray-'09]. The best-known approximation is O( n n) [Gamzu,Segev-'10], which improves on the previous-best O( n) approximation [Balcan,Blum-'06]. In this paper we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized. We demonstrate the versatility of our technique by presenting PTASs for two variants of the highway problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].
|
The technique behind our PTAS resembles Arora's quadtree dissection for Euclidean network design @cite_14 . The basic idea there is to enclose the set of input points in a bounding box and then recursively partition it into a constant number of boxes. This dissection is randomly shifted, and dynamic programming is applied to the resulting random dissection. We similarly enclose the highway in a bounding path and recursively partition the latter. As in Arora's approach, the dissection is randomly shifted. Differently from that case, and crucially for our analysis, the size of the bounding path is a random variable as well. Another major difference is that the dissection is not uniform with respect to input properties but with respect to the optimal weights; for this reason the dissection is constructed in a bottom-up, rather than top-down, fashion via dynamic programming (while computing the approximate solution in parallel).
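The following minimal sketch only illustrates the flavour of a randomly shifted recursive dissection of a line segment; it is purely illustrative and omits the key features of the actual PTAS (in particular, the size of the bounding path is fixed here and the dissection is not guided by the optimal weights).

```python
# Randomly shifted recursive dissection of a line segment into a constant number of parts per level.
import random

def dissect(lo, hi, arity=2, depth=3):
    """Return the subintervals of [lo, hi) at each level of the dissection."""
    levels, current = [], [(lo, hi)]
    for _ in range(depth):
        nxt = []
        for a, b in current:
            step = (b - a) / arity
            nxt += [(a + i * step, a + (i + 1) * step) for i in range(arity)]
        levels.append(nxt)
        current = nxt
    return levels

n = 16                                   # "highway" of n edges occupying [0, n)
shift = random.uniform(0, n)             # random offset of the highway inside the bounding path
size = 2 * n                             # fixed here; in the paper the size is random as well
bounding = (-shift, size - shift)        # bounding path containing [0, n)
for lvl, cells in enumerate(dissect(*bounding), 1):
    print("level", lvl, "has", len(cells), "subpaths")
```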
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2165142526"
],
"abstract": [
"We present a polynomial time approximation scheme for Euclidean TSP in fixed dimensions. For every fixed c > 1 and given any n nodes in R 2 , a randomized version of the scheme finds a (1 + 1 c )-approximation to the optimum traveling salesman tour in O(n (log n ) O(c) ) time. When the nodes are in R d , the running time increases to O(n (log n ) (O( d c)) d-1 ). For every fixed c, d the running time is n • poly(log n ), that is nearly linear in n . The algorithmm can be derandomized, but this increases the running time by a factor O(n d ). The previous best approximation algorithm for the problem (due to Christofides) achieves a 3 2-aproximation in polynomial time. We also give similar approximation schemes for some other NP-hard Euclidean problems: Minimum Steiner Tree, k -TSP, and k -MST. (The running times of the algorithm for k -TSP and k -MST involve an additional multiplicative factor k .) The previous best approximation algorithms for all these problems achieved a constant-factor approximation. We also give efficient approximation schemes for Euclidean Min-Cost Matching, a problem that can be solved exactly in polynomial time. All our algorithms also work, with almost no modification, when distance is measured using any geometric norm (such as e p for p ≥ 1 or other Minkowski norms). They also have simple parallel (i.e., NC) implementations."
]
}
|
1004.0085
|
1941202037
|
Recent studies in the field of human vision science suggest that the human responses to the stimuli on a visual display are non-deterministic. People may attend to different locations on the same visual input at the same time. Based on this knowledge, we propose a new stochastic model of visual attention by introducing a dynamic Bayesian network to predict the likelihood of where humans typically focus on a video scene. The proposed model is composed of a dynamic Bayesian network with 4 layers. Our model provides a framework that simulates and combines the visual saliency response and the cognitive state of a person to estimate the most probable attended regions. Sample-based inference with Markov chain Monte-Carlo based particle filter and stream processing with multi-core processors enable us to estimate human visual attention in near real time. Experimental results have demonstrated that our model performs significantly better in predicting human visual attention compared to the previous deterministic models.
|
Several previous studies have modeled human visual attention using probabilistic techniques. Itti and Baldi @cite_7 investigated a Bayesian approach to detecting surprising events in video signals; their approach models surprise as the Kullback-Leibler divergence between the prior and posterior distributions of fundamental features. Avraham and Lindenbaum @cite_25 used a graphical-model approximation to extend their static saliency model based on self-similarities. Boccignone @cite_6 introduced a nonparametric Bayesian framework for object-based visual attention. Gao, Mahadevan, and Vasconcelos @cite_0 @cite_16 developed a decision-theoretic attention model for object detection.
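As a minimal numerical illustration of the surprise measure of @cite_7 (with made-up numbers, not the authors' feature models), surprise can be computed as the KL divergence between the posterior and prior beliefs after a Bayesian update.

```python
# Bayesian "surprise" as KL(posterior || prior) over a small set of hypothetical feature models.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

prior = np.array([0.7, 0.2, 0.1])            # prior belief over 3 hypothetical feature models
likelihood = np.array([0.1, 0.3, 0.9])       # likelihood of the new observation under each model
posterior = prior * likelihood
posterior /= posterior.sum()                 # Bayes rule

print("surprise (nats):", kl(posterior, prior))
```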
|
{
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_0",
"@cite_16",
"@cite_25"
],
"mid": [
"2170610624",
"2113955871",
"2144207571",
"2164720308",
"2103357101"
],
"abstract": [
"Primates demonstrate unparalleled ability at rapidly orienting towards important events in complex dynamic environments. During rapid guidance of attention and gaze towards potential objects of interest or threats, often there is no time for detailed visual analysis. Thus, heuristic computations are necessary to locate the most interesting events in quasi real-time. We present a new theory of sensory surprise, which provides a principled and computable shortcut to important information. We develop a model that computes instantaneous low-level surprise at every location in video streams. The algorithm significantly correlates with eye movements of two humans watching complex video clips, including television programs (17,936 frames, 2,152 saccadic gaze shifts). The system allows more sophisticated and time-consuming image analysis to be efficiently focused onto the most surprising subsets of the incoming data.",
"We address the problem of object-based visual attention from a Bayesian standpoint. We contend with the issue of joint segmentation and saliency computation suitable to provide a sound basis for dealing with higher level information related to objects present in dynamic scene. To this end we propose a framework relying on nonparametric Bayesian techniques, namely variational inference on a mixture of Dirichlet processes.",
"A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a ), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.",
"A spatiotemporal saliency algorithm based on a center-surround framework is proposed. The algorithm is inspired by biological mechanisms of motion-based perceptual grouping and extends a discriminant formulation of center-surround saliency previously proposed for static imagery. Under this formulation, the saliency of a location is equated to the power of a predefined set of features to discriminate between the visual stimuli in a center and a surround window, centered at that location. The features are spatiotemporal video patches and are modeled as dynamic textures, to achieve a principled joint characterization of the spatial and temporal components of saliency. The combination of discriminant center-surround saliency with the modeling power of dynamic textures yields a robust, versatile, and fully unsupervised spatiotemporal saliency algorithm, applicable to scenes with highly dynamic backgrounds and moving cameras. The related problem of background subtraction is treated as the complement of saliency detection, by classifying nonsalient (with respect to appearance and motion dynamics) points in the visual field as background. The algorithm is tested for background subtraction on challenging sequences, and shown to substantially outperform various state-of-the-art techniques. Quantitatively, its average error rate is almost half that of the closest competitor.",
"Computer vision attention processes assign variable-hypothesized importance to different parts of the visual input and direct the allocation of computational resources. This nonuniform allocation might help accelerate the image analysis process. This paper proposes a new bottom-up attention mechanism. Rather than taking the traditional approach, which tries to model human attention, we propose a validated stochastic model to estimate the probability that an image part is of interest. We refer to this probability as saliency and thus specify saliency in a mathematically well-defined sense. The model quantifies several intuitive observations, such as the greater likelihood of correspondence between visually similar image regions and the likelihood that only a few of interesting objects will be present in the scene. The latter observation, which implies that such objects are (relaxed) global exceptions, replaces the traditional preference for local contrast. The algorithm starts with a rough preattentive segmentation and then uses a graphical model approximation to efficiently reveal which segments are more likely to be of interest. Experiments on natural scenes containing a variety of objects demonstrate the proposed method and show its advantages over previous approaches."
]
}
|
1004.0930
|
2952766239
|
This paper presents a set of exploits an adversary can use to continuously spy on most BitTorrent users of the Internet from a single machine and for a long period of time. Using these exploits for a period of 103 days, we collected 148 million IPs downloading 2 billion copies of contents. We identify the IP address of the content providers for 70 of the BitTorrent contents we spied on. We show that a few content providers inject most contents into BitTorrent and that those content providers are located in foreign data centers. We also show that an adversary can compromise the privacy of any peer in BitTorrent and identify the big downloaders that we define as the peers who subscribe to a large number of contents. This infringement on users' privacy poses a significant impediment to the legal adoption of BitTorrent.
|
Finally, @cite_0 is the work closest to ours in scale; however, they used an infrastructure of @math machines to collect @math million IP addresses within a @math -hour window. In comparison, our customized measurement system used @math machine to collect around @math million IP addresses within the same time window, making it about @math times more efficient. In addition, the fact that we performed our measurement from a single machine demonstrates that virtually anyone can spy on BitTorrent users, which is a serious privacy issue.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2146539853"
],
"abstract": [
"BitTorrent is the most successful open Internet application for content distribution. Despite its importance, both in terms of its footprint in the Internet and the influence it has on emerging P2P applications, the BitTorrent Ecosystem is only partially understood. We seek to provide a nearly complete picture of the entire public BitTorrent Ecosystem. To this end, we crawl five of the most popular torrent-discovery sites over a ine-month period, identifying all of 4.6 million and 38,996 trackers that the sites reference. We also develop a high-performance tracker crawler, and over a narrow window of 12 hours, crawl essentially all of the public Ecosystem's trackers, obtaining peer lists for all referenced torrents. Complementing the torrent-discovery site and tracker crawling, we further crawl Azureus and Mainline DHTs for a random sample of torrents. Our resulting measurement data are more than an order of magnitude larger (in terms of number of torrents, trackers, or peers) than any earlier study. Using this extensive data set, we study in-depth the Ecosystem's torrent-discovery, tracker, peer, user behavior, and content landscapes. For peer statistics, the analysis is based on one typical snapshot obtained over 12 hours. We further analyze the fragility of the Ecosystem upon the removal of its most important tracker service."
]
}
|
1004.0027
|
1653106757
|
Lattices are important as models for the node locations in wireless networks for two main reasons: (1) When network designers have control over the placement of the nodes, they often prefer a regular arrangement in a lattice for coverage and interference reasons. (2) If nodes are randomly distributed or mobile, good channel access schemes ensure that concurrent transmitters are regularly spaced, hence the locations of the transmitting nodes are well approximated by a lattice. In this paper, we introduce general interference bounding techniques that permit the derivation of tight closed-form upper and lower bounds for all lattice networks, and we present and analyze optimum or near-optimum channel access schemes for one-dimensional, square, and triangular lattices.
|
While a growing body of work studies interference in random networks (see, e.g., @cite_2 @cite_4 and references therein), only a few papers have addressed the issue of interference in lattice networks. In @cite_8 , bounds on the interference in triangular networks were derived using a relatively crude upper bound on the Riemann zeta function that is within 25% of @math to @math . We will derive a much tighter bound that is within 1.3%. A TDMA scheduling scheme for square lattices that is optimum for the case where the density of concurrent transmitters is @math is suggested in @cite_5 . Here, we provide near-optimum scheduling schemes for any density @math , @math . The interference distribution in one-dimensional networks with Rayleigh fading is analyzed in @cite_0 for the case where all nodes transmit, and @cite_1 derives outage results and throughput-optimum TDMA schedulers for the same type of network. Finally, the single-hop throughput for two-dimensional lattice networks with Rayleigh fading is approximated in @cite_6 . For non-fading channels, @cite_7 provides throughput results for general TDMA schemes in two-dimensional lattice networks. The interference there is expressed using complicated infinite double sums (that are evaluated numerically), for which we will present tight bounds.
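As a small numerical check (not taken from the cited papers), the interference seen at the origin of a one-dimensional lattice of transmitters with node spacing d and path-loss exponent alpha reduces to a Riemann zeta function, which is why bounds on the zeta function translate directly into interference bounds.

```python
# Truncated 1D lattice interference sum vs. its closed form 2 * d**(-alpha) * zeta(alpha).
import numpy as np
from scipy.special import zeta

d, alpha = 1.0, 3.0
k = np.arange(1, 100001)
partial_sum = 2.0 * np.sum((k * d) ** (-alpha))     # interferers at +/- k*d, origin excluded
closed_form = 2.0 * d ** (-alpha) * zeta(alpha)

print(partial_sum, closed_form)                     # agree to roughly 1e-10
```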
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2145873277",
"2062757421",
"",
"2012912688",
"2124713473",
"2042164227",
"",
"2167254161"
],
"abstract": [
"Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the networks geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs -including point process theory, percolation theory, and probabilistic combinatorics-have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.",
"The throughput of large wireless networks with regular topologies is analyzed under two medium-access control schemes: synchronous array method (SAM) and slotted ALOHA. The regular topologies considered are square, hexagon, and triangle. Both nonfading channels and Rayleigh fading channels are examined. Furthermore, both omnidirectional antennas and directional antennas are considered. Our analysis shows that the SAM leads to a much higher network throughput than the slotted ALOHA. The network throughput in this paper is measured in either bits-hops per second per Hertz per node or bits-meters per second per Hertz per node. The exact connection between the two measures is shown for each topology. With these two fundamental units, the network throughput shown in this paper can serve as a reliable benchmark for future works on network throughput of large networks.",
"",
"Outage probabilities and single-hop throughput are two important performance metrics that have been evaluated for certain specific types of wireless networks. However, there is a lack of comprehensive results for larger classes of networks, and there is no systematic approach that permits the convenient comparison of the performance of networks with different geometries and levels of randomness. The uncertainty cube is introduced to categorize the uncertainty present in a network. The three axes of the cube represent the three main potential sources of uncertainty in interference-limited networks: the node distribution, the channel gains (fading), and the channel access scheme (set of transmitting nodes). For the performance analysis, a new parameter, the so- called spatial contention, is defined. It measures the slope of the outage probability in an ALOHA network as a function of the transmit probability p at p = 0. Outage is defined as the event that the signal-to-interference ratio (SIR) is below a certain threshold in a given time slot. It is shown that the spatial contention is sufficient to characterize outage and throughput in large classes of wireless networks, corresponding to different positions on the uncertainty cube. Existing results are placed in this framework, and new ones are derived. Further, interpreting the outage probability as the SIR distribution, the ergodic capacity of unit-distance links is determined and compared to the throughput achievable for fixed (yet optimized) transmission rates.",
"We present closed-form expressions of the average link throughput for sensor networks with a slotted ALOHA MAC protocol in Rayleigh fading channels. We compare networks with three regular topologies in terms of throughput, transmit efficiency, and transport capacity. In particular, for square lattice networks, we present a sensitivity analysis of the maximum throughput and the optimum transmit probability with respect to the signal-to-interference ratio threshold. For random networks with nodes distributed according to a two-dimensional Poisson point process, the average throughput is analytically characterized and numerically evaluated. It turns out that although regular networks have an only slightly higher average link throughput than random networks for the same link distance, regular topologies have a significant benefit when the end-to-end throughput in multihop connections is considered.",
"This paper deals with the distribution of cumulated instantaneous interference power in a Rayleigh fading channel for an infinite number of interfering stations, where each station transmits with a certain probability, independently of all others. If all distances are known, a necessary and sufficient condition is given for the corresponding distribution to be nondefective. Explicit formulae of density and distribution functions are obtained in the interesting special case that interfering stations are located on a linear grid. Moreover, the Laplace transform of cumulated power is investigated when the positions of stations follow a one- or two-dimensional Poisson process. It turns out that the corresponding distribution is defective for the two-dimensional models.",
"",
"For wireless ad hoc networks with stationary and deterministically placed nodes, finding the optimal placement of the nodes is an interesting and challenging problem, especially under energy and QoS constraints. We study and compare the performance of several networks with regular topologies utilizing a Rayleigh fading link model. For nearest neighbor and shortest path routing, analytical expressions of the path efficiency, delay, and energy consumption for a given end-to-end reception probability are derived. For the interference analysis, the maximum throughput and optimum transmit probability are determined, and a simple MAC scheme is compared with an optimum scheduler, yielding lower and upper performance bounds"
]
}
|
1003.5320
|
1531055819
|
Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms allows to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
|
The problem of metadata mapping addressed in this paper is intimately related to copy detection and search in video @cite_12 @cite_21 . There, one tries to find copies of a video that has undergone modifications (whether intentional or not) that potentially make it very different visually from the original. This problem should be distinguished from action recognition @cite_4 @cite_7 @cite_30 , where the similarity criterion is semantic. Broadly speaking, copy detection problems boil down to finding a video representation that is invariant to a certain class of transformations, whereas action recognition problems consist in recognizing a certain class of behaviors in video. To illustrate the difference, imagine three video sequences: a movie-quality version of Star Wars, the same version broadcast on TV with ad insertion and captured off screen with a camcorder, and the lightsabre fight scene reenacted by amateur actors. The purpose of copy detection is to say that the first and the second video sequences are similar; action recognition, on the other hand, should find similarity between the second and third videos.
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_21",
"@cite_12"
],
"mid": [
"2142194269",
"2142261432",
"2026418062",
"2248282706",
"2018369373"
],
"abstract": [
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"Real-world action recognition applications require the development of systems which are fast, can handle a large variety of actions without a priori knowledge of the type of actions, need a minimal number of parameters, and necessitate as short as possible learning stage. In this paper, we suggest such an approach. We regard dynamic activities as long-term temporal objects, which are characterized by spatio-temporal features at multiple temporal scales. Based on this, we design a simple statistical distance measure between video sequences which captures the similarities in their behavioral content. This measure is nonparametric and can thus handle a wide range of complex dynamic actions. Having a behavior-based distance measure between sequences, we use it for a variety of tasks, including: video indexing, temporal segmentation, and action-based video clustering. These tasks are performed without prior knowledge of the types of actions, their models, or their temporal extents",
"We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term \"irregular\" depends on the context in which the \"regular\" or \"valid\" are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment (\"the query\") using chunks of data (\"pieces of puzzle\") extracted from previous visual examples (\"the database\"). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.",
"A video copy detection system is a content-based search engine [1]. It aims at deciding whether a query video segment is a copy of a video from the indexed dataset or not. A copy may be distorted in various ways. If the system finds a matching video segment, it returns the name of the database video and the time stamp where the query was copied from. Fig. 1 illustrates the video copyright detection system we have developed for the TRECVID 2008 evaluation campaign. The components of this system are detailed in Section 2. Most of them are derived from the state-of-the-art image search engine introduced in [2]. It builds upon the bag-of-features image search system proposed in [3], and provides a more precise representation by adding 1) a Hamming embedding and 2) weak geometric consistency constraints. The HE provides binary signatures that refine the visual word based matching. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all indexed frames, even for a very large dataset. In our best runs, we have indexed 2 million keyframes, represented by 800 million local descriptors. We give some conclusions drawn from our experiments in Section 3. Finally, in section 4 we briefly present our run for the high-level feature detection task.",
"This paper presents a comparative study of methods for video copy detection. Different state-of-the-art techniques, using various kinds of descriptors and voting functions, are described: global video descriptors, based on spatial and temporal features; local descriptors based on spatial, temporal as well as spatio-temporal information. Robust voting functions is adapted to these techniques to enhance their performance and to compare them. Then, a dedicated framework for evaluating these systems is proposed. All the techniques are tested and compared within the same framework, by evaluating their robustness under single and mixed image transformations, as well as for different lengths of video segments. We discuss the performance of each approach according to the transformations and the applications considered. Local methods demonstrate their superior performance over the global ones, when detecting video copies subjected to various transformations."
]
}
|
1003.5320
|
1531055819
|
Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms allows to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
|
One of the cornerstone problems in content-based copy detection and search is the creation of a video representation that allows comparing and matching videos across versions. Different representations based on mosaics @cite_28 , shot boundaries @cite_1 , motion, color, and spatio-temporal intensity distributions @cite_29 , color histograms @cite_15 , and ordinal measures @cite_22 have been proposed. When considering the large variability of versions due to post-production modifications, methods based on spatial @cite_17 @cite_13 @cite_25 and spatio-temporal @cite_23 points of interest and local descriptors were shown to be advantageous @cite_27 . In addition, these methods have proved to be very efficient for image search in very large databases @cite_16 @cite_20 . More recently, Willems et al. @cite_18 proposed feature-based spatio-temporal video descriptors combining the visual information of single video frames with the temporal relations between subsequent frames.
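To give a flavour of the ordinal-measure idea (a sketch under assumed parameters, not the exact signature of @cite_22): each frame is reduced to a small grid of block-intensity averages, and the signature is their rank order, which is invariant to global brightness and contrast changes.

```python
# Ordinal frame signature sketch: rank order of block-average intensities on a coarse grid.
import numpy as np

def ordinal_signature(frame, grid=(3, 3)):
    h, w = frame.shape
    gh, gw = grid
    blocks = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    means = blocks.mean(axis=(1, 3)).ravel()     # one average intensity per block
    return np.argsort(np.argsort(means))         # rank of each block's mean

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160)).astype(float)
sig_original = ordinal_signature(frame)
sig_modified = ordinal_signature(frame * 0.8 + 10)       # brightness/contrast change
print(np.array_equal(sig_original, sig_modified))        # True: the rank order is preserved
```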
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"1980997311",
"2111633771",
"1967677181",
"2021297101",
"160247648",
"1525526046",
"2020163092",
"2073054597",
"2131846894",
"2124404372",
"1677409904",
"2172232203",
"2151103935"
],
"abstract": [
"n this paper, we present a new method for robust content-based video copy detection based on local spatio-temporal features. As we show by experimental validation, the use of local spatio-temporal features instead of purely spatial ones brings additional robustness and discriminativity. Efficient operation is ensured by using the new spatio-temporal features proposed in [20]. To cope with the high-dimensionality of the resulting descriptors, these features are incorporated in a disk-based index and query system based on p-stable locality sensitive hashing. The system is applied to the task of video footage reuse detection in news broadcasts. Results are reported on 88 hours of news broadcast data from the TRECVID2006 dataset.",
"We propose a video signature based on an ordinal measure of resampled video frames, which is robust to changing compression formats, compression ratios, frame sizes and frame rates. For effective localization of a short query video clip in a long target video through the proposed video signature, we developed a coarse-to-fine signature comparison scheme. In the coarse searching step, roughly matched positions are determined based on sequence shape similarity, while in the fine searching step, dynamic programming is applied to handle similarity matching in the case of losing frames, and temporal editing processes are employed on the target video. Experiments show that the proposed video signature has good robustness and uniqueness, which are the two essential properties of video signatures.",
"Abstract Recently, there has been a growing interest in the use of mosaic images to represent the information contained in video sequences. This paper systematically investigates how to go beyond thinking of the mosaic simply as a visualization device, but rather as a basis for an efficient and complete representation of video sequences. We describe two different types of mosaics called the static and the dynamic mosaics that are suitable for different needs and scenarios. These two types of mosaics are unified and generalized in a mosaic representation called the temporal pyramid . To handle sequences containing large variations in image resolution, we develop a multiresolution mosaic . We discuss a series of increasingly complex alignment transformations (ranging from 2D to 3D and layers) for making the mosaics. We describe techniques for the basic elements of the mosaic construction process, namely sequence alignment , sequence integration into a mosaic image, and residual analysis to represent information not captured by the mosaic image. We describe several powerful video applications of mosaic representations including video compression, video enhancement, enhanced visualization , and other applications in video indexing, search , and manipulation .",
"Video copy detection is a complementary approach to watermarking. As opposed to watermarking, which relies on inserting a distinct pattern into the video stream, video copy detection techniques match content-based signatures to detect copies of video. Existing typical content-based copy detection schemes have relied on image matching. This paper proposes two new sequence-matching techniques for copy detection and compares the performance with one of the existing techniques. Motion, intensity and color-based signatures are compared in the context of copy detection. Results are reported on detecting copies of movie clips.© (2001) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.",
"",
"In many image or video retrieval systems, the search for similar objects in the database includes a spatial access method to a multidimensional feature space. This step is generally considered as a problem independent of the features and the similarity type. The well known multidimensional nearest neighbor search has also been widely studied by the database community as a generic method. We propose a novel strategy dedicated to pseudo-invariant features retrieval and more specifically applied to content based copy identification. The range of a query is computed during the search according to deviation statistics between original and observed features. Furthermore, this approximate search range is directly mapped onto a Hilbert space-filling curve, allowing an efficient access to the database. Experimental results give excellent response times for very large databases both on synthetic and real data. This work is used in a TV monitoring system including more than 13000 hours of video in the reference database.",
"Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.",
"Automated video matching and recognition has emerged in many applications in recent years. Computer recognition of TV commercials is one of interesting areas. One of critical challenges in automatic recognition of TV commercials is to generate a unique, robust and compact signature. As amount of video data is stored in large quantities, it is necessary to propose an efficient technique for video seeking and matching. In this paper, we present a binary signature based method BOC for TV matching. Experimental results on a real large commercial video database show that our novel approach delivers a significantly better performance comparing to the existing methods.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"This paper proposes and compares two novel schemes for near duplicate image and video-shot detection. The first approach is based on global hierarchical colour histograms, using Locality Sensitive Hashing for fast retrieval. The second approach uses local feature descriptors (SIFT) and for retrieval exploits techniques used in the information retrieval community to compute approximate set intersections between documents using a min-Hash algorithm. The requirements for near-duplicate images vary according to the application, and we address two types of near duplicate definition: (i) being perceptually identical (e.g. up to noise, discretization effects, small photometric distortions etc); and (ii) being images of the same 3D scene (so allowing for viewpoint changes and partial occlusion). We define two shots to be near-duplicates if they share a large percentage of near-duplicate frames. We focus primarily on scalability to very large image and video databases, where fast query processing is necessary. Both methods are designed so that only a small amount of data need be stored for each image. In the case of near-duplicate shot detection it is shown that a weak approximation to histogram matching, consuming substantially less storage, is sufficient for good results. We demonstrate our methods on the TRECVID 2006 data set which contains approximately 165 hours of video (about 17.8M frames with 146K key frames), and also on feature films and pop videos.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
}
|
1003.5320
|
1531055819
|
Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms allows to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
|
One of the main disadvantages of existing video representations is the way invariance to video transformations is achieved. Usually, the representation is designed based on quantities and properties of the video that are insensitive to typical transformations. For example, gradient-based descriptors @cite_17 @cite_25 are known to be insensitive to illumination and color changes. Such a construction may be unable to generalize to other classes of transformations, or may result in a suboptimal tradeoff between invariance and discriminativity.
|
{
"cite_N": [
"@cite_25",
"@cite_17"
],
"mid": [
"1677409904",
"2151103935"
],
"abstract": [
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
}
|
1003.5320
|
1531055819
|
Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms allows to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
|
An alternative approach, adopted in this paper, is to learn the invariance from examples of video transformations. By simulating the post-production and editing process, we are able to produce pairs of video sequences that are supposed to be similar (differing only up to a transformation) and pairs of sequences from different videos that are supposed to be dissimilar. Such pairs are used as a training set for similarity-preserving hashing and metric learning algorithms @cite_19 @cite_10 @cite_8 in order to create a metric between video sequences that achieves optimal invariance and discriminativity on the training set.
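As a purely illustrative, hedged sketch of this pair-based training idea (not the system described above, and not any of the cited algorithms): the toy descriptors, the diagonal re-weighting, and the random-hyperplane hashing below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 16, 32

# Toy "video descriptors": dimensions 0..7 are stable under the simulated
# transformations, dimensions 8..15 are nuisance dimensions that change a lot.
anchors = rng.normal(size=(200, dim))
noise = np.concatenate([0.05 * np.ones(8), 1.5 * np.ones(8)])
positives = anchors + noise * rng.normal(size=(200, dim))   # simulated copies
negatives = rng.normal(size=(200, dim))                     # unrelated sequences

# Learn non-negative per-dimension weights so that weighted squared distances
# shrink on positive pairs and grow on negative pairs (plain gradient descent
# on an objective that is linear in the weights).
w = np.ones(dim)
for _ in range(200):
    d_pos = ((anchors - positives) ** 2).mean(axis=0)
    d_neg = ((anchors - negatives) ** 2).mean(axis=0)
    w = np.clip(w - 0.5 * (d_pos - d_neg), 0.0, None)

# Similarity-preserving hashing: signs of random projections taken in the
# learned (diagonally re-weighted) space.
H = rng.normal(size=(dim, n_bits))
def hash_bits(x):
    return (np.sqrt(w) * x) @ H > 0

ham_pos = np.mean(hash_bits(anchors) != hash_bits(positives))
ham_neg = np.mean(hash_bits(anchors) != hash_bits(negatives))
print(f"mean Hamming distance: positives {ham_pos:.3f}, negatives {ham_neg:.3f}")
```

The learned diagonal weighting is only a stand-in for the Mahalanobis-metric and boosted-hashing methods cited above; the point is merely that pairs labelled by simulated transformations suffice to drive such an optimization.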
|
{
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_8"
],
"mid": [
"",
"2162881463",
"2154956324"
],
"abstract": [
"",
"We introduce a method that enables scalable image search for learned metrics. Given pairwise similarity and dissimilarity constraints between some images, we learn a Mahalanobis distance function that captures the imagespsila underlying relationships well. To allow sub-linear time similarity search under the learned metric, we show how to encode the learned metric parameterization into randomized locality-sensitive hash functions. We further formulate an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality make it infeasible to learn an explicit weighting over the feature dimensions. We demonstrate the approach applied to a variety of image datasets. Our learned metrics improve accuracy relative to commonly-used metric baselines, while our hashing construction enables efficient indexing with learned distances and very large databases.",
"The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques."
]
}
|
1003.3499
|
1764540108
|
Creating virtual models of real spaces and objects is cumbersome and time consuming. This paper focuses on the problem of geometric reconstruction from sparse data obtained from certain image-based modeling approaches. A number of elegant and simple-to-state problems arise concerning when the geometry can be reconstructed. We describe results and counterexamples, and list open problems.
|
There is a rich history of work on reconstructing polyhedra from partial descriptions; see Lucier @cite_23 for a survey. There is little work, however, on reconstructing polyhedra from sparse point-plane or point-normal data, let alone more complex metadata. Biedl et al. @cite_25 discuss several polygon reconstruction problems based on point-normal and related data. Their reconstruction results are limited to two dimensions.
|
{
"cite_N": [
"@cite_25",
"@cite_23"
],
"mid": [
"2136686792",
"2109556124"
],
"abstract": [
"A range-finding scanner can collect information about the shape of an (unknown) polygonal room in which it is placed. Suppose that a set of scanners returns not only a set of points, but also additional information, such as the normal to the plane when a scan beam detects a wall. We consider the problem of reconstructing the floor plan of a room from different types of scan data. In particular, we present algorithmic and hardness results for reconstructing two-dimensional polygons from points, point normal pairs, and visibility polygons. The polygons may have restrictions on topology (e.g., to be simply connected) or geometry (e.g., to be orthogonal). We show that this reconstruction problem is NP-hard in most models, but for some assumptions allows polynomial-time reconstruction algorithms which we describe.",
"This thesis covers work on two topics: unfolding polyhedra into the plane and reconstructing polyhedra from partial information. For each topic, we describe previous work in the area and present an array of new research and results. Our work on unfolding is motivated by the problem of characterizing precisely when overlaps will occur when a polyhedron is cut along edges and unfolded. By contrast to previous work, we begin by classifying overlaps according to a notion of locality. This classification enables us to focus upon particular types of overlaps, and use the results to construct examples of polyhedra with interesting unfolding properties. The research on unfolding is split into convex and non-convex cases. In the non-convex case, we construct a polyhedron for which every edge unfolding has an overlap, with fewer faces than all previously known examples. We also construct a non-convex polyhedron for which every edge unfolding has a particularly trivial type of overlap. In the convex case, we construct a series of example polyhedra for which every unfolding of various types has an overlap. These examples disprove some existing conjectures regarding algorithms to unfold convex polyhedra without overlaps. The work on reconstruction is centered around analyzing the computational complexity of a number of reconstruction questions. We consider two classes of reconstruction problems. The first problem is as follows: given a collection of edges in space, determine whether they can be rearranged by translation only to form a polygon or polyhedron. We consider variants of this problem by introducing restrictions like convexity, orthogonality, and non-degeneracy. All of these problems are NP-complete, though some are proved to be only weakly NP-complete. We then consider a second, more classical problem: given a collection of edges in space, determine whether they can be rearranged by translation and or rotation to form a polygon or polyhedron. This problem is NP-complete for orthogonal polygons, but polynomial algorithms exist for nonorthogonal polygons. For polyhedra, it is shown that if degeneracies are allowed then the problem is NP-hard, but the complexity is still unknown for non-degenerate polyhedra."
]
}
|
1003.3661
|
1489437768
|
Dereferencing a URI returns a representation of the current state of the resource identified by that URI. But, on the Web representations of prior states of a resource are also available, for example, as resource versions in Content Management Systems or archival resources in Web Archives such as the Internet Archive. This paper introduces a resource versioning mechanism that is fully based on HTTP and uses datetime as a global version indicator. The approach allows "follow your nose" style navigation both from the current time-generic resource to associated time-specific version resources as well as among version resources. The proposed versioning mechanism is congruent with the Architecture of the World Wide Web, and is based on the Memento framework that extends HTTP with transparent content negotiation in the datetime dimension. The paper shows how the versioning approach applies to Linked Data, and by means of a demonstrator built for DBpedia, it also illustrates how it can be used to conduct a time-series analysis across versions of Linked Data descriptions.
|
There is a relationship between the described work and efforts that study the provenance of Linked Data, specifically those provenance aspects concerned with the time intervals in which specific data is valid. For example, @cite_7 is concerned with provenance graphs that allow expressing such validity information, whereas @cite_8 focuses on applications that support preserving link integrity over time. Our proposal introduces a native HTTP approach that allows leveraging the results of these efforts at Web scale.
|
{
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2115373755",
"2128536206"
],
"abstract": [
"The openness of the Web and the ease to combine linked data from dierent sources creates new challenges. Systems that consume linked data must evaluate quality and trustworthiness of the data. A common approach for data quality assessment is the analysis of provenance information. For this reason, this paper discusses provenance of data on the Web and proposes a suitable provenance model. While traditional provenance research usually addresses the creation of data, our provenance model also represents data access, a dimension of provenance that is particularly relevant in the context of Web data. Based on our model we identify options to obtain provenance information and we raise open questions concerning the publication of provenance-related metadata for linked data on the Web.",
"The Linking Open Data (LOD) initiative has motivated numerous institutions to publish their data on the Web and to interlink them with those of other data sources. But since LOD sources are subject to change, links between resources can break and lead to processing errors in applications that consume linked data. The current practice is to ignore this problem and leave it to the applications what to do when broken links are detected. We believe, however, that LOD data sources should provide the highest possible degree of link integrity in order to relieve applications from this issue, similar to databases that provide mechanisms to preserve referential integrity in their data. As a possible solution, we propose DSNotify, an add-on for LOD sources that detects broken links and assists the data source in fixing them, e.g., when resources were moved to other Web locations."
]
}
|
1003.1507
|
1574334008
|
In a column-restricted covering integer program (CCIP), all the non-zero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ+ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality of service type constraints in a covering problem. We investigate priority versions of the line (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality of certain classes of this PCIP by a constant. Algorithmically, we give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
|
There is a rich and long line of work ( @cite_9 @cite_13 @cite_11 @cite_15 @cite_0 ) on approximation algorithms for CIPs, of which we state only the results most relevant to our work. Assuming no upper bounds on the variables, Srinivasan @cite_15 gave a @math -approximation to the problem (where @math is the dilation, as before). Later, Kolliopoulos and Young @cite_19 obtained the same approximation factor while respecting the upper bounds. However, these algorithms do not give better results when the constraint matrix is known to have special structure. On the hardness side, Trevisan @cite_10 showed that it is NP-hard to obtain a @math -approximation algorithm even for 0,1-CIPs.
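As a hedged, illustrative aside (this is not any of the cited algorithms): the LP relaxation underlying this line of work is easy to write down and solve for a toy column-restricted instance. The matrix, demands, costs, and bounds below are invented, and the final round-up is the naive one, which in general may violate the multiplicity bounds.

```python
# Minimal toy sketch: LP relaxation of a column-restricted covering integer
# program, solved with SciPy, followed by a naive round-up.
import numpy as np
from scipy.optimize import linprog

s = np.array([3.0, 2.0, 5.0])            # supply s_j of each column
pattern = np.array([[1, 1, 0],
                    [0, 1, 1],
                    [1, 0, 1]])          # 0,1 incidence pattern of the CIP
A = pattern * s                          # column-restricted: entries are 0 or s_j
b = np.array([4.0, 6.0, 5.0])            # row demands
c = np.array([1.0, 1.0, 2.0])            # column costs
d = np.array([2.0, 3.0, 1.0])            # multiplicity (upper) bounds

# LP relaxation: minimize c^T x  subject to  A x >= b  and  0 <= x <= d.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=list(zip(np.zeros_like(d), d)),
              method="highs")
x_frac = res.x
x_int = np.ceil(x_frac - 1e-9)           # naive rounding; may exceed d in general

print("LP optimum:", round(res.fun, 3), "fractional solution:", x_frac)
print("rounded solution:", x_int,
      "covers all rows:", bool(np.all(A @ x_int >= b - 1e-9)))
```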
|
{
"cite_N": [
"@cite_10",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2001663593",
"2073127061",
"2125674105",
"1976432206",
"1993119087",
"2088188524",
"2161973620"
],
"abstract": [
"par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .",
"We give a worst-case analysis for two greedy heuristics for the integer programming problem minimize cx , Ax (ge) b , 0 (le) x (le) u , x integer, where the entries in A, b , and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral, the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics.",
"The Lova´sz local lemma due to Erdods and Lova´sz (Infinite and Finite Sets, Colloq. Math. Soc. J. Bolyai 11, 1975, pp. 609-627) is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson (Combinatorica, 7 (1987), pp. 365-374) to derive good approximation algorithms for such problems. We use our extension of the local lemma to prove that randomized rounding produces, with nonzero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan (J. Comput. System Sci., 37 (1988), pp. 130-143), to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"Given matrices A and B and vectors a, b, c and d, all with non-negative entries, we consider the problem of computing min c^Tx:[email protected]?Z\"+^n,Ax>=a,Bx==a) and multiplicity constraints (x=",
"Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known.",
"We propose a heuristic that delivers in @math steps a solution for the set covering problem the value of which does not exceed the maximum number of sets covering an element times the optimal value.",
"We build on the classical greedy sequential set cover algorithm, in the spirit of the primal-dual schema, to obtain simple parallel approximation algorithms for the set cover problem and its generalizations. Our algorithms use randomization, and our randomized voting lemmas may be of independent interest. Fast parallel approximation algorithms were known before for set cover, though not for any of its generalizations. >"
]
}
|
1003.1507
|
1574334008
|
In a column-restricted covering integer program (CCIP), all the non-zero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ+ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality of service type constraints in a covering problem. We investigate priority versions of the line (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality of certain classes of this PCIP by a constant. Algorithmically, we give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
|
The most relevant work to this paper is that of Kolliopoulos @cite_22 . The author studies CCIPs that satisfy a rather strong assumption, called the no-bottleneck assumption, that the supply of any column is smaller than the demand of any row. Kolliopoulos @cite_22 shows that if one is allowed to violate the upper bounds by a multiplicative constant, then the integrality gap of the CCIP is within a constant factor of that of the original 0,1-CIP (such a result is implicit in the paper; the author only states a @math integrality gap). As the author notes, such a violation is necessary; otherwise the CCIP has an unbounded integrality gap. If one is not allowed to violate the upper bounds, nothing better than the result of @cite_19 is known for the special case of CCIPs.
|
{
"cite_N": [
"@cite_19",
"@cite_22"
],
"mid": [
"1976432206",
"2093259407"
],
"abstract": [
"Given matrices A and B and vectors a, b, c and d, all with non-negative entries, we consider the problem of computing min c^Tx:[email protected]?Z\"+^n,Ax>=a,Bx==a) and multiplicity constraints (x=",
"In a covering integer program (CIP), we seek an n-vector x of nonnegative integers, which minimizes cT ċ x, subject to Ax ≥ b, where all entries of A, b, c are nonnegative. In their most general form, CIPs include also multiplicity constraints of the type x ≤ d, i.e., arbitrarily large integers are not acceptable in the solution. The multiplicity constraints incur a dichotomy with respect to approximation between (0,1)-CIPs whose matrix A contains only zeros and ones and the general case. Let m denote the number of rows of A. The well known O(log m) cost approximation with respect to the optimum of the linear relaxation is valid for general CIPs, but multiplicity constraints can be dealt with effectively only in the (0,1) case. In the general case, existing algorithms that match the integrality gap for the cost objective violate the multiplicity constraints by a multiplicative O(log m) factor. We make progress by defining column-restricted CIPs, a strict superclass of (0,1)-CIPs, and showing how to find for them integral solutions of cost O(log m) times the LP optimum while violating the multiplicity constraints by a multiplicative O(1) factor."
]
}
|
1003.1507
|
1574334008
|
In a column-restricted covering integer program (CCIP), all the non-zero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ+ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality of service type constraints in a covering problem. We investigate priority versions of the line (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality of certain classes of this PCIP by a constant. Algorithmically, we give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
|
Our work on CCIPs parallels a large body of work on column-restricted packing integer programs (CPIPs). Under the no-bottleneck assumption, Kolliopoulos and Stein @cite_20 show that CPIPs can be approximated asymptotically as well as the corresponding 0,1-PIPs. The constants in the result of @cite_20 were subsequently improved in @cite_18 . These results imply constant-factor approximations for the column-restricted tree packing problem under the no-bottleneck assumption. Without the no-bottleneck assumption, however, only a polylogarithmic approximation is known for this problem @cite_17 .
|
{
"cite_N": [
"@cite_18",
"@cite_20",
"@cite_17"
],
"mid": [
"2170344325",
"2005509629",
"2103363383"
],
"abstract": [
"We consider requests for capacity in a given tree network T e (V, E) where each edge e of the tree has some integer capacity ue. Each request f is a node pair with an integer demand df and a profit wf which is obtained if the request is satisfied. The objective is to find a set of demands that can be feasibly routed in the tree and which provides a maximum profit. This generalizes well-known problems, including the knapsack and b-matching problems. When all demands are 1, we have the integer multicommodity flow problem. [1997] had shown that this problem is NP-hard and gave a 2-approximation algorithm for the cardinality case (all profits are 1) via a primal-dual algorithm. Our main result establishes that the integrality gap of the natural linear programming relaxation is at most 4 for the case of arbitrary profits. Our proof is based on coloring paths on trees and this has other applications for wavelength assignment in optical network routing. We then consider the problem with arbitrary demands. When the maximum demand dmax is at most the minimum edge capacity umin, we show that the integrality gap of the LP is at most 48. This result is obtained by showing that the integrality gap for the demand version of such a problem is at most 11.542 times that for the unit-demand case. We use techniques of Kolliopoulos and Stein [2004, 2001] to obtain this. We also obtain, via this method, improved algorithms for line and ring networks. Applications and connections to other combinatorial problems are discussed.",
"In a packing integer program, we are given a matrix @math and column vectors @math with nonnegative entries. We seek a vector @math of nonnegative integers, which maximizes @math subject to @math The edge and vertex-disjoint path problems together with their unsplittable flow generalization are NP-hard problems with a multitude of applications in areas such as routing, scheduling and bin packing. These two categories of problems are known to be conceptually related, but this connection has largely been ignored in terms of approximation algorithms. We explore the topic of approximating disjoint-path problems using polynomial-size packing integer programs. Motivated by the disjoint paths applications, we introduce the study of a class of packing integer programs, called column-restricted. We develop improved approximation algorithms for column-restricted programs, a result that we believe is of independent interest. Additional approximation algorithms for disjoint-paths are presented that are simple to implement and achieve good performance when the input has a special structure.",
"We consider the unsplittable flow problem (UFP) and the closely related column-restricted packing integer programs (CPIPs). In UFP we are given an edge-capacitated graph G = (V ,E ) and k request pairs R 1 , ..., R k , where each R i consists of a source-destination pair (s i ,t i ), a demand d i and a weight w i . The goal is to find a maximum weight subset of requests that can be routed unsplittably in G . Most previous work on UFP has focused on the no-bottleneck case in which the maximum demand of the requests is at most the smallest edge capacity. Inspired by the recent work of . [3] on UFP on a path without the above assumption, we consider UFP on paths as well as trees. We give a simple O (logn ) approximation for UFP on trees when all weights are identical; this yields an O (log2 n ) approximation for the weighted case. These are the first non-trivial approximations for UFP on trees. We develop an LP relaxation for UFP on paths that has an integrality gap of O (log2 n ); previously there was no relaxation with o (n ) gap. We also consider UFP in general graphs and CPIPs without the no-bottleneck assumption and obtain new and useful results."
]
}
|
1003.1507
|
1574334008
|
In a column-restricted covering integer program (CCIP), all the non-zero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ+ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality of service type constraints in a covering problem. We investigate priority versions of the line (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality of certain classes of this PCIP by a constant. Algorithmically, we give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
|
The only work on priority versions of covering problems that we are aware of is due to Charikar, Naor and Schieber @cite_8 , who studied the priority Steiner tree and forest problems in the context of QoS management in a network multicasting application. They present a @math -approximation algorithm for the problem, and @cite_2 later show that no efficient @math -approximation algorithm can exist unless @math ( @math is the number of vertices).
|
{
"cite_N": [
"@cite_2",
"@cite_8"
],
"mid": [
"2166747457",
"2148965337"
],
"abstract": [
"Consider the following classical network design problem: a set of terminals T e ti wishes to send traffic to a root r in an n-node graph G e (V, E). Each terminal ti sends di units of traffic and enough bandwidth has to be allocated on the edges to permit this. However, bandwidth on an edge e can only be allocated in integral multiples of some base capacity ue and hence provisioning k × ue bandwidth on edge e incurs a cost of ⌈k⌉ times the cost of that edge. The objective is a minimum-cost feasible solution. This is one of many network design problems widely studied where the bandwidth allocation is governed by side constraints: edges can only allow a subset of cables to be purchased on them or certain quality-of-service requirements may have to be met. In this work, we show that this problem and, in fact, several basic problems in this general network design framework cannot be approximated better than Ω(log log n) unless NP ⊆ DTIME (nO(log log log n)), where vVv e n. In particular, we show that this inapproximability threshold holds for (i) the Priority-Steiner Tree problem, (ii) the (single-sink) Cost-Distance problem, and (iii) the single-sink version of an even more fundamental problem, Fixed Charge Network Flow. Our results provide a further breakthrough in the understanding of the level of complexity of network design problems. These are the first nonconstant hardness results known for all these problems.",
"We consider a network design problem, where applications require various levels of Quality-of-Service (QoS) while connections have limited performance. Suppose that a source needs to send a message to a heterogeneous set of receivers. The objective is to design a low-cost multicast tree from the source that would provide the QoS levels (e.g., bandwidth) requested by the receivers. We assume that the QoS level required on a link is the maximum among the QoS levels of the receivers that are connected to the source through the link. In accordance, we define the cost of a link to be a function of the QoS level that it provides. This definition of cost makes this optimization problem more general than the classical Steiner tree problem. We consider several variants of this problem all of which are proved to be NP-Hard. For the variant where QoS levels of a link can vary arbitrarily and the cost function is linear in its QoS level, we give a heuristic that achieves a multicast tree with cost at most a constant times the cost of an optimal multicast tree. The constant depends on the best constant approximation ratio of the classical Steiner tree problem. For the more general variant, where each link has a given QoS level and cost we present a heuristic that generates a multicast tree with cost O(min log r, k ) times the cost of an optimal tree, where r denotes the number of receivers, and k denotes the number of different levels of QoS required. We generalize this result to hold for the case of many multicast groups."
]
}
|
1003.1507
|
1574334008
|
In a column-restricted covering integer program (CCIP), all the non-zero entries of any column of the constraint matrix are equal. Such programs capture capacitated versions of covering problems. In this paper, we study the approximability of CCIPs, in particular, their relation to the integrality gaps of the underlying 0,1-CIP. If the underlying 0,1-CIP has an integrality gap O(γ), and assuming that the integrality gap of the priority version of the 0,1-CIP is O(ω), we give a factor O(γ+ω) approximation algorithm for the CCIP. Priority versions of 0,1-CIPs (PCIPs) naturally capture quality of service type constraints in a covering problem. We investigate priority versions of the line (PLC) and the (rooted) tree cover (PTC) problems. Apart from being natural objects to study, these problems fall in a class of fundamental geometric covering problems. We bound the integrality of certain classes of this PCIP by a constant. Algorithmically, we give a polytime exact algorithm for PLC, show that the PTC problem is APX-hard, and give a factor 2-approximation algorithm for it.
|
To the best of our knowledge, the column-restricted or priority versions of the line and tree cover problems have not been studied. The best approximation algorithm known for both is the @math factor implied by the results of @cite_19 stated above. However, upon completion of our work, Nitish Korula @cite_23 pointed out to us that a @math -approximation for column-restricted line cover is implicit in a result of Bar- @cite_5 . We remark that their algorithm is not LP-based, although our general result on CCIPs is.
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_23"
],
"mid": [
"1976432206",
"2003902046",
""
],
"abstract": [
"Given matrices A and B and vectors a, b, c and d, all with non-negative entries, we consider the problem of computing min c^Tx:[email protected]?Z\"+^n,Ax>=a,Bx==a) and multiplicity constraints (x=",
"We present a general framework for solving resource allocation and scheduling problems. Given a resource of fixed size, we present algorithms that approximate the maximum throughput or the minimum loss by a constant factor. Our approximation factors apply to many problems, among which are: (i) real-time scheduling of jobs on parallel machines, (ii) bandwidth allocation for sessions between two endpoints, (iii) general caching, (iv) dynamic storage allocation, and (v) bandwidth allocation on optical line and ring topologies. For some of these problems we provide the first constant factor approximation algorithm. Our algorithms are simple and efficient and are based on the local-ratio technique. We note that they can equivalently be interpreted within the primal-dual schema.",
""
]
}
|
1003.1879
|
2951414870
|
Block-transitive Steiner @math -designs form a central part of the study of highly symmetric combinatorial configurations at the interface of several disciplines, including group theory, geometry, combinatorics, coding and information theory, and cryptography. The main result of the paper settles an important open question: There exist no non-trivial examples with @math (or larger). The proof is based on the classification of the finite 3-homogeneous permutation groups, itself relying on the finite simple group classification.
|
The author @cite_10 @cite_13 recently confirmed the non-existence of block-transitive Steiner @math -designs , modulo two special cases that remain elusive.
|
{
"cite_N": [
"@cite_13",
"@cite_10"
],
"mid": [
"2060604410",
"1677235812"
],
"abstract": [
"This paper takes a significant step towards confirming a long-standing and far-reaching conjecture of Peter J. Cameron and Cheryl E. Praeger. They conjectured in 1993 that there are no non-trivial block-transitive 6-designs. We prove that the Cameron-Praeger conjecture is true for the important case of non-trivial Steiner 6-designs, i.e. for 6-(v,k,@l) designs with @l=1, except possibly when the group is P@CL(2,p^e) with p=2 or 3, and e is an odd prime power.",
"One of the most central and long-standing open questions in combinatorial design theory concerns the existence of Steiner t -designs for large values of t . Although in his classical 1987 paper, L. Teirlinck has shown that non-trivial t -designs exist for all values of t , no non-trivial Steiner t -design with t > 5 has been constructed until now. Understandingly, the case t = 6 has received considerable attention. There has been recent progress concerning the existence of highly symmetric Steiner 6-designs: It is shown in [M. Huber, J. Algebr. Comb. 26 (2007), pp. 453---476] that no non-trivial flag-transitive Steiner 6-design can exist. In this paper, we announce that essentially also no block-transitive Steiner 6-design can exist."
]
}
|
1003.2012
|
2113561748
|
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
|
Doppler-shift methods such as those of @cite_2 @cite_6 make use of this correlation in order to derive depth information from Doppler-shift data. If the assumption of a linear dependence between the position and velocity vectors holds, a linear mapping exists between the Doppler shift (i.e. the velocity along the line of sight) and the position along the same direction (see Figure , bottom). In this case the resulting models are accurate within the limits of the accuracy of the Earth-bound observational data. Unfortunately, many objects contain several different kinematic subsystems which may have different relations between velocity and position. Some also show complex interactions with their local environment which may further complicate the velocity law @cite_37 . Furthermore, these methods require an almost complete coverage of the object with regularly spaced observations of the Doppler shift, which requires special observing programs. Such homogeneous data sets are rarely available.
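For illustration, the linear mapping assumed by these methods can be stated in one line (a minimal sketch of the homologous, Hubble-type flow; the proportionality constant k is our notation, not taken from the cited works):

```latex
\vec{v} = k\,\vec{r}
\quad\Longrightarrow\quad
v_{\mathrm{los}} = k\,z
\quad\Longrightarrow\quad
z = \frac{v_{\mathrm{los}}}{k},
```

so that, under this assumption, each measured line-of-sight velocity fixes the depth z along that line of sight up to the single global constant k.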
|
{
"cite_N": [
"@cite_37",
"@cite_6",
"@cite_2"
],
"mid": [
"2036515059",
"2950588829",
"1986104079"
],
"abstract": [
"Based on axisymmetric hydrodynamical simulations and three-dimensional (3D) reconstructions with Shape, we investigate the kinematical signatures of deviations from homologous (Hubble-type) outflows in some typical shapes of planetary nebulae (PNs). We find that, in most situations considered in our simulations, the deviations from a Hubble-type flow are significant and observable. The deviations are systematic and a simple parameterization of them considerably improves morphokinematical models of the simulations. We describe such extensions to a homologous expansion law that capture the global velocity structure of hydrodynamical axisymmetric nebulae during their wind-blown phase. It is the size of the poloidal velocity component that strongly influences the shape of the position-velocity diagrams that are obtained, not so much the variation of the radial component. The deviations increase with the degree of collimation of the nebula and they are stronger at intermediate latitudes. We describe potential deformations which these deviations might produce in 3D reconstructions that assume Hubble-type outflows. The general conclusion is that detailed morphokinematical observations and modeling of PNs can reveal whether a nebula is still in a hydrodynamically active stage (windy phase) or whether it has reached ballistic expansion.",
"",
"On a obtenu les champs des vitesses d'expansion [OIII] et Hα dans les nebuleuses planetaires NGC 6058 et 6804 et les champs des vitesses d'expansion [OIII], Hα et [NII] dans NGC 6309 6751 et 6818, a partir de spectres de haute dispersion. On a obtenu des modeles spatiocinematiques des nebuleuses en supposant une vitesse d'expansion du gaz proportionnelle a la distance a l'etoile centrale et en utilisant la correlation entre le rayon et la vitesse d'expansion donnee par Sabbadin et coll. (1984)"
]
}
|
1003.2012
|
2113561748
|
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
|
Other algorithms that are oriented more towards high visual quality than towards physical accuracy of the results are based on symmetry constraints ( @cite_21 @cite_25 , Lintu et al. @cite_7 @cite_29 , @cite_30 ). Many astronomical nebulae show an inherent spherical or axial symmetry due to their evolution from more or less symmetric sources. This symmetry assumption may be used to reconstruct the missing spatial dimension @cite_15 .
|
{
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_15",
"@cite_25"
],
"mid": [
"2145457898",
"1568111879",
"2139787017",
"2151637322",
"92398505",
"2108210481"
],
"abstract": [
"Distant astrophysical objects like planetary nebulae can normally only be observed from a single point of view. Assuming a cylindrically symmetric geometry, one can nevertheless create 3D models of those objects using tomographic methods. We solve the resulting algebraic equations efficiently on graphics hardware. Small deviations from axial symmetry are then corrected using heuristic methods, because the arising 3D models are, in general, no longer unambiguously defined. We visualize the models using real-time volume rendering. Models for actual planetary nebulae created by this approach match the observational data acquired from the earth’s viewpoint, while also looking plausible from other viewpoints for which no experimental data is available.",
"We describe a de-projection method to recover the 3D distribution of the ionized gas and dust component in planetary nebulae. Based on observations in the optical and radio regime, we propose an analysis-by-synthesis approach to obtain physically consistent spatial distributions that take extinction as well as scattering into account. As input we require two calibrated data sets of the same planetary nebula, a radio free-free emission map and a hydrogen recombination line map. From the radio free-free emission map, we first recover the density distribution of the ionized gas component using non-linear optimization while enforcing symmetry constraints. In a second step, we compare the recovered gas distribution to the input hydrogen recombination line map and optimize for the density distribution of the dust component considering extinction as well as scattering.",
"This paper addresses the problem of reconstructing the 3D structure of planetary nebulae from 2D observations. Assuming axial symmetry, our method jointly reconstructs the distribution of dust and ionized gas in the nebulae from observations at two different wavelengths. In an inverse rendering framework we optimize for the emission and absorption densities which are correlated to the gas and dust distribution present in the nebulae. First, the density distribution of the dust component is estimated based on an infrared image, which traces only the dust distribution due to its intrinsic temperature. In a second step, we optimize for the gas distribution by comparing the rendering of the nebula to the visible wavelength image. During this step, besides the emission of the ionized gas, we further include the effect of absorption and scattering due to the already estimated dust distribution. Using the same approach, we can as well start with a radio image from which the gas distribution is derived without absorption, then deriving the dust distribution from the visible wavelength image considering absorption and scattering. The intermediate steps and the final reconstruction results are visualized at real-time frame rates using a volume renderer. Using our method we recover both gas and dust density distributions present in the nebula by exploiting the distinct absorption or emission parameters at different wavelengths.",
"Determining the three-dimensional structure of distant astronomical objects is a challenging task, given that terrestrial observations provide only one viewpoint. For this task, bipolar planetary nebulae are interesting objects of study because of their pronounced axial symmetry due to fundamental physical processes. Making use of this symmetry constraint, we present a technique to automatically recover the axisymmetric structure of bipolar planetary nebulae from two-dimensional images. With GPU-based volume rendering driving a non-linear optimization, we estimate the nebulaýs local emission density as a function of its radial and axial coordinates, and we recover the orientation of the nebula relative to Earth. The optimization refines the nebula model and its orientation by minimizing the differences between the rendered image and the original astronomical image. The resulting model enables realistic 3D visualizations of planetary nebulae, e.g. for educational purposes in planetarium shows. In addition, the recovered spatial distribution of the emissive gas allows validating computer simulation results of the astrophysical formation processes of planetary nebulae.",
"",
"From our terrestrially confined viewpoint, the actual three-dimensional shape of distant astronomical objects is, in general, very challenging to determine. For one class of astronomical objects, however, spatial structure can be recovered from conventional 2D images alone. So-called planetary nebulae (PNe) exhibit pronounced symmetry characteristics that come about due to fundamental physical processes. Making use of this symmetry constraint, we present a technique to automatically recover the axisymmetric structure of many planetary nebulae from photographs. With GPU-based volume rendering driving a nonlinear optimization, we estimate the nebula's local emission density as a function of its radial and axial coordinates and we recover the orientation of the nebula relative to Earth. The optimization refines the nebula model and its orientation by minimizing the differences between the rendered image and the original astronomical image. The resulting model allows creating realistic 3D visualizations of these nebulae, for example, for planetarium shows and other educational purposes. In addition, the recovered spatial distribution of the emissive gas can help astrophysicists gain deeper insight into the formation processes of planetary nebulae."
]
}
|
1003.2012
|
2113561748
|
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
|
The above-mentioned shortcomings of fully automatic reconstruction approaches may be avoided by resorting to interactive modeling techniques. The interactive creation of a model that reproduces a given single image is a common task, but most existing solutions are not well suited for modeling the emissive transparent objects that are prevalent in astronomy, and they do not allow for the representation of velocity information and spectral data. Among the tools that most closely reflect our modeling approach are the interactive approaches of @cite_34 , François and Medioni @cite_22 and @cite_12 , all of which expect some kind of user-specified coarse geometry or a set of user-defined geometry constraints, which is then automatically converted into a full three-dimensional model that best fits the provided image under the given constraints. The idea of automatically optimizing a parameterized (deformable or ``morphable'') model has been successfully employed in the works of Montagnat and Delingette @cite_27 and Romdhani and Vetter @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_27",
"@cite_34",
"@cite_12"
],
"mid": [
"2097365005",
"1994366794",
"1997820584",
"1971719398",
"2103159399"
],
"abstract": [
"3D morphable models, as a means to generate images of a class of objects and to analyze them, have become increasingly popular. The problematic part of this framework is the registration of the model to an image, a.k.a. the fitting. The characteristic features of a fitting algorithm are its efficiency, robustness, accuracy and automation. Many accurate algorithms based on gradient descent techniques exist which are unfortunately short on the other features. Recently, an efficient algorithm called inverse compositional image alignment (ICIA) algorithm, able to fit 2D images, was introduced. We extent this algorithm to fit 3D morphable models using a novel mathematical notation which facilitates the formulation of the fitting problem. This formulation enables us to avoid a simplification so far used in the ICIA, being as efficient and leading to improved fitting precision. Additionally, the algorithm is robust without sacrificing its efficiency and accuracy, thereby conforming to three of the four characteristics of a good fitting algorithm.",
"Abstract We present a system at the junction between Computer Vision and Computer Graphics, to produce a three-dimensional (3D) model of an object as observed in a single image, with a minimum of high-level interaction from a user. The input to our system is a single image. First, the user points, coarsely, at image features (edges) that are subsequently automatically and reproducibly extracted in real-time. The user then performs a high level labeling of the curves (e.g. limb edge, cross-section) and specifies relations between edges (e.g. symmetry, surface or part). NURBS are used as working representation of image edges. The objects described by the user specified, qualitative relationships are then reconstructed either as a set of connected parts modeled as Generalized Cylinders, or as a set of 3D surfaces for 3D bilateral symmetric objects. In both cases, the texture is also extracted from the image. Our system runs in real-time on a PC.",
"Abstract To achieve geometric reconstruction from 3D datasets two complementary approaches have been widely used. On one hand, the deformable model framework locally applies forces to fit the data. On the other hand, the non-rigid registration framework computes a global transformation minimizing the distance between a template and the data. We first show that applying a global transformation on a surface template, is equivalent to applying certain global forces on a deformable model. Second, we propose a scheme which combines the registration and free-form deformation. This globally constrained deformation model allows us to control the amount of deformation from the reference shape with a single parameter. Finally, we propose a general algorithm for performing model-based reconstruction in a robust and accurate manner. Examples on both range data and medical images are used to illustrate and validate the globally constrained deformation framework.",
"We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and imagebased techniques, has two components. The first component is a photogrammetricmodeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach’s ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Color, shading, shadowing, and texture I.4.8 [Image Processing]: Scene Analysis Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD).",
"This paper presents a novel approach for reconstructing free-form, texture-mapped, 3D scene models from a single painting or photograph. Given a sparse set of user-specified constraints on the local shape of the scene, a smooth 3D surface that satisfies the constraints is generated This problem is formulated as a constrained variational optimization problem. In contrast to previous work in single view reconstruction, our technique enables high quality reconstructions of free-form curved surfaces with arbitrary reflectance properties. A key feature of the approach is a novel hierarchical transformation technique for accelerating convergence on a non-uniform, piecewise continuous grid. The technique is interactive and updates the model in real time as constraints are added, allowing fast reconstruction of photorealistic scene models. The approach is shown to yield high quality results on a large variety of images."
]
}
|
1003.2012
|
2113561748
|
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
|
An early tool for the rendering part of the astrophysical modeling process was our earlier work @cite_8 , which was able to reproduce many standard forms of observational data from a given model. Similar codes have been used by Santander-García @cite_19 and @cite_28 . The model itself, however, still had to be hard-coded into the program, making the modeling part inherently cumbersome. Steffen and López @cite_17 later incorporated their spectral renderer into a commercial modeling system as a plugin. This simplified the modeling process to a large extent, but performance and usability were still far from the quality of an integrated modeling and rendering system.
|
{
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_17",
"@cite_8"
],
"mid": [
"2009641212",
"2135034002",
"2042773618",
"2127948648"
],
"abstract": [
"We present an atlas of Hubble Space Telescope images and ground-based, long-slit, narrowband spectra centered on the 6584 A line of [N II] and the 5007 A line of [O III]. The spectra were obtained for a variety of slit positions across each target (as shown on the images) in an effort to account for nonspherical nebular geometries in a robust manner. We have extended the prolate ellipsoidal shell model originally devised by Aaquist, Zhang, and Kwok to generate synthetic images, as well as long-slit spectra. Using this model, we have derived basic parameters for the subsample of PNe that present ellipsoidal appearances and regular kinematic patterns. We find differences between our parameters for the target PNe as compared to those of previous studies, which we attribute to increased spatial resolution for our image data and the inclusion of kinematic data in the model fits. The data and analysis presented in this paper can be combined with detections of nebular angular expansion rates to determine precise distances to the PN targets.",
"The structure and kinematics of the bipolar nebula Mz 3 have been investigated by means of HST, CTIO and ESO images and spectra. At least four distinct outflows have been identified which, from the inside to the outside, are the following: a pair of bright bipolar lobes, two opposite highly collimated column-shaped outflows, a conical system of radial structure, and a very dim, previously unnoticed, low-latitude and flattened (ring-like) radial outflow. A simple Hubble-law describes the velocity field of the ballisticaly expanding lobes, columns and rays, suggesting that their shaping has being done at very early stages of evolution, in a sort of eruptive events with increasing degree of collimation and expansion ages ranging from ∼600 for the inner structures to ∼1600 years (per kpc to the nebula) for the largest ones.",
"We present a powerful new tool to analyse and disentangle the 3-D geometry and kinematic structure of gaseous nebulae. The method consists in combining commercially available digital animation software to simulate the 3-D structure and expansion pattern of the nebula with a dedicated, purpose-built rendering software that produces the final images and long slit spectra which are compared to the real data. We show results for the complex planetary nebulae NGC 6369 and Abell 30",
"ABSTRACT We have used high resolution longslit spectroscopy to investigate the ionized gasin the active galaxy IRAS 04210+0400 and its association with the radio structure.We suggest that two of the ionized components are associated with the centraldouble radio source and observe that the relative positions of these components varyfor different emission lines. Both results are consistent with the radio componentsrepresenting the working surfaces of a pair of jets emerging from the centre of thegalaxy. In this scenario, the optical emission in the centre arises behind the bowshocksproduced by the jets in the interstellar medium.The emission lines are detected and show a dramatic (≈900 km s −1 ) spreadin velocity at the position of the radio lobe hotspots. We suggest a model whichexplains this phenomenon as the result of a jet head emerging through the boundarybetween the interstellar and intergalactic medium. A similar scenario has previouslybeen suggested as a model to explain wide angle tail radio sources (WAT’s). Based onthis model, we simulate the longslit spectra of these regions and compare the resultswith the observations.Key words: galaxies: active - galaxies: individual: IRAS 04210+0400 - galaxies: jets- galaxies: kinematics and dynamics - galaxies: Seyfert"
]
}
|
1003.2012
|
2113561748
|
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
|
The only astrophysics-suited tool we know of that follows a paradigm similar to that of our earlier work is , which is under development at MIT @cite_20 @cite_4 . Contrary to , focuses on X-ray data instead of the near-visible wavelengths that are suitable for realistic visualizations aimed at public media like planetaria @cite_9 . Also, the tool does not provide an interactive modeling system; instead, models are defined using scripting and Constructive Solid Geometry (CSG).
|
{
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_20"
],
"mid": [
"91474661",
"",
"1855128972"
],
"abstract": [
"Contemporary challenges in the production of digital planetarium shows include real-time rendering realism as well as the creation of authentic content. While interactive, live performance is a standard feature of professional digital-dome planetarium software today, support for physically correct rendering of astrophysical phenomena is still often limited. Similarly, the tools currently available for planetarium show production do not offer much assistance towards creating scientifically accurate models of astronomical objects. Our paper presents recent results from computer graphics research, offering solutions to contemporary challenges in digital planetarium rendering and modeling. Incorporating these algorithms into the next generation of dome display software and production tools will help advance digital planetariums toward make full use of their potential.",
"",
"Astronomical data generally consists of 2 or more high-resolution axes, e.g., X,Y position on the sky or wavelength and position-along-one-axis (long-slit spectrometer). Analyzing these multi-dimension observations requires combining 3D source models (including velocity effects), instrument models, and multi-dimensional data comparison and fitting. A prototype of such a \"Beyond XSPEC\" (Noble & Nowak, 2008) system is presented here using Chandra imag- ing and dispersed HETG grating data. Techniques used include: Monte Carlo event generation, chi-squared comparison, conjugate gradient fitting adapted to the Monte Carlo characteristics, and informative visualizations at each step. These simple baby steps of progress only scratch the surface of the computational potential that is available these days for astronomical analysis."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
Parallel sparse matrix-vector multiplication using CSR-like data structures on multi-processor machines has been the focus of a number of researchers since the 1990s. Early efforts include the paper by Çatalyürek and Aykanat @cite_7 on hypergraph models applied to the matrix partitioning problem, Im and Yelick @cite_1 , who analysed the effect of register and cache blocking and reordering, and Geus and Röllin @cite_25 , who considered prefetching, register blocking and reordering for symmetric matrices. @cite_31 also examined several storage formats on a ccNUMA machine, which required dealing with the machine's page allocation mechanisms.
|
{
"cite_N": [
"@cite_31",
"@cite_1",
"@cite_25",
"@cite_7"
],
"mid": [
"1537296367",
"1207115811",
"2062240003",
"1499467741"
],
"abstract": [
"The present paper discusses scalable implementations of sparse matrix-vector products, which are crucial for high performance solutions of large-scale linear equations, on a cc-NUMA machine SGI Altix3700. Three storage formats for sparse matrices are evaluated, and scalability is attained by implementations considering the page allocation mechanism of the NUMA machine. Influences of the cache memory bus architectures on the optimum choice of the storage format are examined, and scalable converters between storage formats shown to facilitate exploitation of storage formats of higher performance.",
"",
"Abstract The sparse matrix–vector product is an important computational kernel that runs ineffectively on many computers with super-scalar RISC processors. In this paper we analyse the performance of the sparse matrix–vector product with symmetric matrices originating from the FEM and describe techniques that lead to a fast implementation. It is shown how these optimisations can be incorporated into an efficient parallel implementation using message-passing. We conduct numerical experiments on many different machines and show that our optimisations speed up the sparse matrix–vector multiplication substantially.",
"In this work, we show the deficiencies of the graph model for decomposing sparse matrices for parallel matrix-vector multiplication. Then, we propose two hypergraph models which avoid all deficiencies of the graph model. The proposed models reduce the decomposition problem to the well-known hypergraph partitioning problem widely encountered in circuit partitioning in VLSI. We have implemented fast Kernighan-Lin based graph and hypergraph partitioning heuristics and used the successful multilevel graph partitioning tool (Metis) for the experimental evaluation of the validity of the proposed hypergraph models. We have also developed a multilevel hypergraph partitioning heuristic for experimenting the performance of the multilevel approach on hypergraph partitioning. Experimental results on sparse matrices, selected from Harwell-Boeing collection and NETLIB suite, confirm both the validity of our proposed hypergraph models and appropriateness of the multilevel approach to hypergraph partitioning."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
Regarding modern multi-core platforms, the work of @cite_9 contains a thorough analysis of a number of factors that may degrade the performance of both sequential and multi-threaded implementations. Performance tests were carried out on three different platforms, comprising SMP, SMT and ccNUMA systems. Two partitioning schemes were implemented, one guided by the number of rows and the other by the number of non-zeros per thread. It was observed that the latter approach leads to better load balancing, thus significantly improving the running time. For large matrices, they obtained average speedups of 1.96 and 2.13 using 2 and 4 threads, respectively, on an Intel Core 2 Xeon. On this platform, their code reached about 1612 Mflop/s with 2 threads and 2967 Mflop/s when spawning 4 threads. This performance changes considerably for matrices whose working sets are far from fitting in cache; in that case it drops to around 815 Mflop/s and 849 Mflop/s for the 2- and 4-threaded cases, respectively.
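To make the non-zero-based partitioning concrete, the following is a minimal sketch (our own simplified greedy heuristic and array names, not the implementation of @cite_9 ) that splits CSR rows so that each thread receives roughly the same number of non-zeros:

/* Sketch: partition CSR rows so each thread gets ~nnz/nthreads non-zeros.
 * bounds must hold nthreads+1 entries; row_ptr is the usual CSR row pointer. */
void partition_by_nnz(const int *row_ptr, int nrows, int nthreads, int *bounds)
{
    int nnz = row_ptr[nrows];                     /* total number of non-zeros   */
    int target = (nnz + nthreads - 1) / nthreads; /* desired non-zeros per chunk */
    int t = 0;
    bounds[0] = 0;
    for (int i = 0; i < nrows && t < nthreads - 1; i++) {
        /* close the current chunk once it holds at least 'target' non-zeros */
        if (row_ptr[i + 1] - row_ptr[bounds[t]] >= target)
            bounds[++t] = i + 1;
    }
    while (t < nthreads - 1)                      /* more threads than chunks: pad */
        bounds[++t] = nrows;
    bounds[nthreads] = nrows;
}

Thread tid then processes the rows in the half-open range [bounds[tid], bounds[tid+1]), which balances work far better than an equal-rows split when the non-zero distribution is skewed.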
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1975116854"
],
"abstract": [
"In this paper, we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided, and thus unsuccessful attempts for optimization. In order to gain an insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. In addition, we investigate the parallel version of the kernel and report on the corresponding performance results and their relation to each architecture's specific multithreaded configuration. Based on our experiments, we extract useful conclusions that can serve as guidelines for the optimization process of both single and multithreaded versions of the kernel."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
Memory contention is viewed as the major bottleneck of implementations of the sparse matrix-vector product. This problem was tackled by @cite_13 via compression techniques, reducing both the matrix connectivity data and the floating-point values to be stored. Although achieving good scalability, they obtained at most a 2-fold speedup on 8 threads for matrices that do not fit in cache. The experiments were conducted on two Intel Clovertown processors with 4MB of L2 cache each. In the same direction, @cite_23 proposed a pattern-based blocking scheme for reducing the index overhead. Accompanied by software prefetching and vectorization techniques, they attained an average sequential speedup of 1.4. Their multi-threaded implementation required synchronizing the accesses to the @math vector. In brief, each thread maintains a private vector for storing partial values, which are summed up into the global destination vector in a reduction step. They observed average speedups of around 1.04, 1.11 and 2.3 when spawning 2, 4 and 8 threads, respectively. These results were obtained on a 2-socket Intel Harpertown 5400 with 8GB of RAM and 12MB L2 cache per socket.
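The private-vector-plus-reduction strategy described above can be sketched as follows (a generic illustration under our own naming and OpenMP assumptions, not the actual code of @cite_23 ; the scatter loop is written for a transpose-style product, where threads would otherwise race on the destination vector):

#include <omp.h>
#include <string.h>

/* Sketch: y += A^T x with CSR scatter writes; each thread accumulates into a
 * private copy of the destination vector, followed by a reduction step.
 * y_priv must hold nthreads * n doubles. */
void spmv_scatter_private(const int *row_ptr, const int *col, const double *val,
                          const double *x, double *y, int n, double *y_priv)
{
    #pragma omp parallel
    {
        int nt  = omp_get_num_threads();
        int tid = omp_get_thread_num();
        double *yt = y_priv + (size_t)tid * n;
        memset(yt, 0, (size_t)n * sizeof(double));   /* each thread clears its own copy */

        #pragma omp for schedule(static)
        for (int i = 0; i < n; i++)
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                yt[col[k]] += val[k] * x[i];         /* scatter: would race if shared   */

        /* implicit barrier above; now reduce the private copies into y */
        #pragma omp for schedule(static)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int t = 0; t < nt; t++)
                s += y_priv[(size_t)t * n + j];
            y[j] += s;
        }
    }
}

The extra nthreads*n buffer is exactly the working-set increase mentioned in the abstract above; the reduction pass trades that memory for the absence of locks or atomics in the scatter loop.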
|
{
"cite_N": [
"@cite_13",
"@cite_23"
],
"mid": [
"2146530035",
"2072806558"
],
"abstract": [
"The sparse matrix-vector multiplication kernel exhibits limited potential for taking advantage of modern shared memory architectures due to its large memory bandwidth requirements. To decrease memory contention and improve the performance of the kernel we propose two compression schemes. The first, called CSR-DU, targets the reduction of the matrix structural data by applying coarse grain delta encoding for the column indices. The second scheme, called CSR-VI, targets the reduction of the numerical values using indirect indexing and can only be applied to matrices which contain a small number of unique values. Evaluation of both methods on a rich matrix set showed that they can significantly improve the performance of the multithreaded version of the kernel and achieve good scalability for large matrices.",
"Pattern-based Representation (PBR) is a novel approach to improving the performance of Sparse Matrix-Vector Multiply (SMVM) numerical kernels. Motivated by our observation that many matrices can be divided into blocks that share a small number of distinct patterns, we generate custom multiplication kernels for frequently recurring block patterns. The resulting reduction in index overhead significantly reduces memory bandwidth requirements and improves performance. Unlike existing methods, PBR requires neither detection of dense blocks nor zero filling, making it particularly advantageous for matrices that lack dense nonzero concentrations. SMVM kernels for PBR can benefit from explicit prefetching and vectorization, and are amenable to parallelization. We present sequential and parallel performance results for PBR on two current multicore architectures, which show that PBR outperforms available alternatives for the matrices to which it is applicable."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
Different row-wise partitioning methods were considered by @cite_2 . Besides evenly splitting non-zeros among threads, they evaluated the effect of the automatic scheduling mechanisms provided by OpenMP, namely the static, dynamic and guided schedules. Once more, the non-zero strategy was the best choice. They also parallelized the block CSR format. Experiments were run on four AMD Opteron 870 dual-core processors with 16GB of RAM and @math MB L2 caches. Both CSR and block CSR schemes resulted in poor scalability for large matrices, for which the maximum speedup was approximately 2 using 8 threads.
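For reference, the row-parallel CSR kernel that such schedules distribute looks roughly as follows (a generic sketch with assumed array names, not the code of @cite_2 ; the schedule clause can be switched between static, dynamic and guided):

/* Sketch: row-parallel CSR SpMV; the schedule clause selects how OpenMP
 * distributes rows among threads. */
void spmv_csr_rows(const int *row_ptr, const int *col, const double *val,
                   const double *x, double *y, int n)
{
    #pragma omp parallel for schedule(guided)
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            s += val[k] * x[col[k]];
        y[i] = s;                       /* rows are disjoint, so no races on y */
    }
}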
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2120610962"
],
"abstract": [
"Sparse matrix-vector multiplication is an important computational kernel in scientific applications. However, it performs poorly on modern processors because of a low compute-to-memory ratio and its irregular memory access patterns. This paper discusses the implementations of sparse matrix-vector algorithm using OpenMP to execute iterative methods on the Dawning S4800A1. Two storage formats (CSR and BCSR) for sparse matrices and three scheduling schemes (static, dynamic and guided) provided by the standard OpenMP are evaluated. We also compared these three schemes with non-zero scheduling, where each thread is assigned approximately the same number of non-zero elements. Experimental data shows that, the non-zero scheduling can provide the best performance in most cases. The current implementation provides satisfactory scalability for most of matrices. However, we only get a limited speedup for some large matrices that contain millions of non-zero elements."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
@cite_27 evaluated the sparse matrix-vector kernel using the CSR format on several up-to-date chip multiprocessor systems, such as the heterogeneous STI Cell. They examined the effect of various optimization techniques on the performance of a multi-threaded CSR kernel, including software pipelining, branch elimination, SIMDization, explicit prefetching, 16-bit indices, and register, cache and translation lookaside buffer (TLB) blocking. A row-wise approach was employed for thread scheduling. For finite element matrices, and in comparison to OSKI @cite_21 , speedups of the fully tuned parallel code ranged from 1.8 to 5.5 using 8 threads on an Intel Xeon E5345.
|
{
"cite_N": [
"@cite_27",
"@cite_21"
],
"mid": [
"2103877122",
"2099625934"
],
"abstract": [
"We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines."
]
}
|
1003.0952
|
1533938001
|
We consider the problem of developing an efficient multi-threaded implementation of the matrix-vector multiplication algorithm for sparse matrices with structural symmetry. Matrices are stored using the compressed sparse row-column format (CSRC), designed for profiting from the symmetric non-zero pattern observed in global finite element matrices. Unlike classical compressed storage formats, performing the sparse matrix-vector product using the CSRC requires thread-safe access to the destination vector. To avoid race conditions, we have implemented two partitioning strategies. In the first one, each thread allocates an array for storing its contributions, which are later combined in an accumulation step. We analyze how to perform this accumulation in four different ways. The second strategy employs a coloring algorithm for grouping rows that can be concurrently processed by threads. Our results indicate that, although incurring an increase in the working set size, the former approach leads to the best performance improvements for most matrices.
|
More recently, Buluç et al. @cite_18 have presented a block structure that allows efficient computation of both @math and @math in parallel. It can be roughly seen as a dense collection of sparse blocks, rather than a sparse collection of dense blocks as in the standard block CSR format. In sequential experiments carried out on a ccNUMA machine featuring AMD Opteron 8214 processors, there were no improvements over the standard CSR; in fact, their data structure was always slower for band matrices. Concerning its parallelization, however, it was proved to yield a parallelism of @math . In practice, it scaled up to 4 threads on an Intel Xeon X5460, and presented linear speedups on an AMD Opteron 8214 and an Intel Core i7 920. On the latter, where the best results were attained, it reached speedups of 1.86, 2.97 and 3.71 using 2, 4 and 8 threads, respectively. However, it does not seem to directly allow the simultaneous computation of @math and @math in a single loop, as CSRC does.
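To illustrate the last point, the following is a minimal sequential sketch of such a single-loop symmetric product (generic layout and naming assumptions on our part, not the exact CSRC data structure): every stored entry updates both y[i] and y[j], which is exactly why a parallel version needs thread-safe access to the destination vector.

/* Sketch: product for a structurally symmetric sparse matrix stored as its
 * strictly lower triangle (CSR-like) plus the diagonal; one pass over the
 * stored entries applies a_ij and a_ji together. */
void spmv_sym(const int *row_ptr, const int *col,
              const double *val_low, const double *val_up,
              const double *diag, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = diag[i] * x[i];

    for (int i = 0; i < n; i++) {
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            int j = col[k];                    /* j < i: strictly lower part    */
            y[i] += val_low[k] * x[j];         /* contribution of a_ij to y[i]  */
            y[j] += val_up[k]  * x[i];         /* contribution of a_ji to y[j]  */
        }
    }
}

When rows are split among threads, the y[j] updates cross partition boundaries, so a parallel version must fall back on private accumulation buffers or a row coloring, as discussed above.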
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2126004407"
],
"abstract": [
"This paper introduces a storage format for sparse matrices, called compressed sparse blocks (CSB), which allows both Ax and A,x to be computed efficiently in parallel, where A is an n×n sparse matrix with nnzen nonzeros and x is a dense n-vector. Our algorithms use Θ(nnz) work (serial running time) and Θ(√nlgn) span (critical-path length), yielding a parallelism of Θ(nnz √nlgn), which is amply high for virtually any large matrix. The storage requirement for CSB is the same as that for the more-standard compressed-sparse-rows (CSR) format, for which computing Ax in parallel is easy but A,x is difficult. Benchmark results indicate that on one processor, the CSB algorithms for Ax and A,x run just as fast as the CSR algorithm for Ax, but the CSB algorithms also scale up linearly with processors until limited by off-chip memory bandwidth."
]
}
|
1003.0628
|
1485642969
|
Text documents are complex high dimensional objects. To effectively visualize such data it is important to reduce its dimensionality and visualize the low dimensional embedding as a 2-D or 3-D scatter plot. In this paper we explore dimensionality reduction methods that draw upon domain knowledge in order to achieve a better low dimensional embedding and visualization of documents. We consider the use of geometries specified manually by an expert, geometries derived automatically from corpus statistics, and geometries computed from linguistic resources.
|
We focus in this paper on visualizing a corpus of text documents using a 2-D scatter plot. While this is perhaps the most popular and practical text visualization technique, other methods such as @cite_18 , @cite_9 , @cite_20 , @cite_5 , @cite_4 , @cite_11 exist. It is conceivable that the techniques developed in this paper may be ported to enhance these alternative visualization methods as well.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_5",
"@cite_20",
"@cite_11"
],
"mid": [
"2143967859",
"1880262756",
"1828401780",
"41137908",
"2106738877",
"2148725305"
],
"abstract": [
"This paper introduces a novel representation, called the InfoCrystal, that can be used as a visualization tool as well as a visual query language to help users search for information. The InfoCrystal visualizes all the possible relationships among N concepts. Users can assign relevance weights to the concepts and use thresholding to select relationships of interest. The InfoCrystal allows users to specify Boolean as well as vector-space queries graphically. Arbitrarily complex queries can be created by using the InfoCrystals as building blocks and organizing them in a hierarchical structure. The InfoCrystal enables users to explore and filter information in a flexible, dynamic and interactive way. >",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.",
"TextArc is an alternate view of a text, tailored to expose the frequency and distribution of the words of an entire text on a single page or screen. In texts having no markup or metainformation, one of the quickest ways of getting a feeling for the content of a text is to scan through the words that are used most frequently. Knowing the distribution of those words in the text can support another level of understanding, e.g. helping to reveal chapters in a text that concentrate on a specific subject. A structure and method of displaying an entire text on a single page or screen is presented. It reveals both frequency and distribution, and provides a well-understood and organized space that works as a background for other tools.",
"The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored \"currents\" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme.",
"Documents and other categorical valued time series are often characterized by the frequencies of short range sequential patterns such as n-grams. This representation converts sequential data of varying lengths to high dimensional histogram vectors which are easily modeled by standard statistical models. Unfortunately, the histogram representation ignores most of the medium and long range sequential dependencies making it unsuitable for visualizing sequential data. We present a novel framework for sequential visualization of discrete categorical time series based on the idea of local statistical modeling. The framework embeds categorical time series as smooth curves in the multinomial simplex summarizing the progression of sequential trends. We discuss several visualization techniques based on the above framework and demonstrate their usefulness for document visualization."
]
}
|
1003.0723
|
2951413215
|
Communication channel established from a display to a device's camera is known as visual channel, and it is helpful in securing key exchange protocol. In this paper, we study how visual channel can be exploited by a network terminal and mobile device to jointly verify information in an interactive session, and how such information can be jointly presented in a user-friendly manner, taking into account that the mobile device can only capture and display a small region, and the user may only want to authenticate selective regions-of-interests. Motivated by applications in Kiosk computing and multi-factor authentication, we consider three security models: (1) the mobile device is trusted, (2) at most one of the terminal or the mobile device is dishonest, and (3) both the terminal and device are dishonest but they do not collude or communicate. We give two protocols and investigate them under the abovementioned models. We point out a form of replay attack that renders some other straightforward implementations cumbersome to use. To enhance user-friendliness, we propose a solution using visual cues embedded into the 2D barcodes and incorporate the framework of "augmented reality" for easy verifications through visual inspection. We give a proof-of-concept implementation to show that our scheme is feasible in practice.
|
Data can be transmitted to a camera effectively using 2D barcodes. There are many 2D barcode designs, for example the QR code @cite_10 and the High Capacity Color Barcode (HCCB) @cite_4 , which uses colored triangles. Many barcodes are designed to encode data in printed copies. There are also proposals that use other types of sources in the visual channel. proposed ``Screen codes'' @cite_21 for transferring data from a display to a camera-equipped mobile device, where the data are encoded as a grid of luminosity fluctuations within an arbitrary image. A challenging hurdle in using hand-held cameras to establish the channel is motion blur. A few stabilization algorithms have been developed for hand-held cameras @cite_23 @cite_14 , and for 2D barcodes @cite_6 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_21",
"@cite_6",
"@cite_23",
"@cite_10"
],
"mid": [
"2143020811",
"2099066201",
"",
"1967325045",
"2154243452",
""
],
"abstract": [
"The mobile imaging market is a rapidly developing market, and has outgrown the traditional imaging market. This market is dominated by CMOS sensors, with pixels getting small and smaller. As pixel size is reduced, the sensitivity is lowered and must be compensated by longer exposure times. However, in the mobile market, this amount to increased motion blur. We characterize the hand motion with a typical shooting scenario. This data can be used to create an evaluation procedure for image stabilization solutions, and we indeed present one such procedure.",
"A 2D color barcode can hold much more information than a binary barcode. Barcodes are often intended for consumer use where using a cellphone, a consumer can take an image of a barcode on a product, and retrieve relevant information about the product. The barcode must be read using computer vision techniques. While a color barcode can hold more information, it makes this vision task in consumer scenarios unusually challenging. We present our approach to the localization and segmentation of a 2D color barcode in such challenging scenarios, along with its evaluation on a diverse collection of images of Microsoft's recently launched high capacity color barcode (HCCB). We exploit the unique trait of barcode reading: the barcode decoder can give the vision algorithm feedback, and develop a progressive strategy to achieve both - high accuracy in diverse scenarios as well as computational efficiency.",
"",
"With the ubiquitous of cellular phones, mobile applications with 2D barcodes have drawn a lot of attentions in recent years. However, the previous works for extracting 2D barcodes from an image do not consider the distortion resulted from camera shake. Moreover, the previous works for extracting 2D barcodes from an image do not take a complex background into account. In this paper, therefore, we propose an efficient and effective algorithm to extract 2D barcode from a complex background in a camera-shaken image. Compared with previous approaches, our algorithm outperforms in not only smaller running time but also higher accuracy of the barcode recognition.",
"We present an algorithm that uses two or more images of the same scene blurred by camera motion for recovery of 3D scene structure and simultaneous restoration of sharp image. Motion blur is modeled by convolution with space-varying mask that changes its scale with the distance of imaged objects. The mask can be of arbitrary shape corresponding to the integral of the camera path during the pick-up time, which can be measured for instance by inertial sensors. This approach is more general than previously published algorithms that assumed shift-invariant blur or fixed, rectangular or Gaussian, mask shape. Algorithm can be easily parallelized and has a potential to be used in practical applications such as compensation of camera shake during long exposures",
""
]
}
|
1003.0723
|
2951413215
|
Communication channel established from a display to a device's camera is known as visual channel, and it is helpful in securing key exchange protocol. In this paper, we study how visual channel can be exploited by a network terminal and mobile device to jointly verify information in an interactive session, and how such information can be jointly presented in a user-friendly manner, taking into account that the mobile device can only capture and display a small region, and the user may only want to authenticate selective regions-of-interests. Motivated by applications in Kiosk computing and multi-factor authentication, we consider three security models: (1) the mobile device is trusted, (2) at most one of the terminal or the mobile device is dishonest, and (3) both the terminal and device are dishonest but they do not collude or communicate. We give two protocols and investigate them under the abovementioned models. We point out a form of replay attack that renders some other straightforward implementations cumbersome to use. To enhance user-friendliness, we propose a solution using visual cues embedded into the 2D barcodes and incorporate the framework of "augmented reality" for easy verifications through visual inspection. We give a proof-of-concept implementation to show that our scheme is feasible in practice.
|
Similar to our scheme, @cite_24 suggested a technique to embed designs into barcodes in order to increase their expressiveness and bring visual meaning to them. These systems recognize the barcodes based on the topology, rather than the geometry, of the codes @cite_22 , and were initially developed for tracking objects in tangible user interfaces and augmented reality applications @cite_12 . Augmented reality has been exploited to enhance the user experience in many applications, including education @cite_1 , gaming @cite_16 , and outdoor activities @cite_3 . @cite_17 uses 2D barcodes as visual tags in an augmented reality environment, where a camera can capture the barcode on a physical object and link it to its information.
|
{
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"1993806858",
"2158705726",
"2167807287",
"2119300480",
"2101968326",
"2123068464"
],
"abstract": [
"",
"The form factors of handheld computers make them increasingly popular among K-12 educators. Although some compelling examples of educational software for handhelds exist, we believe that the potential of this platform are just being discovered. This paper reviews innovative applications for mobile computing for both education and entertainment purposes, and then proposes a framework for approaching handheld applications we call “augmented reality educational gaming.” We then describe our development process in creating a development platform for augmented reality games that draws from rapid prototyping, learner-centered software, and contemporary game design methodologies. We provide a narrative case study of our development activities spread across five case studies with classrooms, and provide a design narrative explaining this development process and articulate an approach to designing educational software on emerging technology platforms. Pedagogical, design, and technical conclusions and implications are discussed.",
"We have built an outdoors augmented reality system for mobile phones that matches camera-phone images against a large database of location-tagged images using a robust image retrieval algorithm. We avoid network latency by implementing the algorithm on the phone and deliver excellent performance by adapting a state-of-the-art image retrieval algorithm based on robust local descriptors. Matching is performed against a database of highly relevant features, which is continuously updated to reflect changes in the environment. We achieve fast updates and scalability by pruning of irrelevant features based on proximity to the user. By compressing and incrementally updating the features stored on the phone we make the system amenable to low-bandwidth wireless connections. We demonstrate system robustness on a dataset of location-tagged images and show a smart-phone implementation that achieves a high image matching rate while operating in near real-time.",
"Visual markers are graphic symbols designed to be easily recognised by machines. They are traditionally used to track goods, but there is increasing interest in their application to mobile HCI. By scanning a visual marker through a camera phone users can retrieve localised information and access mobile services. One missed opportunity in current visual marker systems is that the markers themselves cannot be visually designed, they are not expressive to humans, and thus fail to convey information before being scanned. This paper provides an overview of d-touch, an open source system that allows users to create their own markers, controlling their aesthetic qualities. The system runs in real-time on mobile phones and desktop computers. To increase computational efficiency d-touch imposes constraints on the design of the markers in terms of the relationship of dark and light regions in the symbols. We report a user study in which pairs of novice users generated between 3 and 27 valid and expressive markers within one hour of being introduced to the system, demonstrating its flexibility and ease of use.",
"While the knowledge economy has reshaped the world, schools lag behind in producing appropriate learning for this social change. Science education needs to prepare students for a future world in which multiple representations are the norm and adults are required to “think like scientists.” Location-based augmented reality games offer an opportunity to create a “post-progressive” pedagogy in which students are not only immersed in authentic scientific inquiry, but also required to perform in adult scientific discourses. This cross-case comparison as a component of a design-based research study investigates three cases (roughly 28 students total) where an Augmented Reality curriculum, Mad City Mystery, was used to support learning in environmental science. We investigate whether augmented reality games on handhelds can be used to engage students in scientific thinking (particularly argumentation), how game structures affect students’ thinking, the impact of role playing on learning, and the role of the physical environment in shaping learning. We argue that such games hold potential for engaging students in meaningful scientific argumentation. Through game play, players are required to develop narrative accounts of scientific phenomena, a process that requires them to develop and argue scientific explanations. We argue that specific game features scaffold this thinking process, creating supports for student thinking non-existent in most inquiry-based learning environments.",
"\"Audio d-touch\" uses a consumer-grade web camera and customizable block objects to provide an interactive tangible interface for a variety of time based musical tasks such as sequencing, drum editing and collaborative composition. Three instruments are presented here. Future applications of the interface are also considered.",
"The CyberCode is a visual tagging system based on a 2D-barcode technology and provides several features not provided by other tagging systems. CyberCode tags can be recognized by the low-cost CMOS or CCD cameras found in more and more mobile devices, and it can also be used to determine the 3D position of the tagged object as well as its ID number. This paper describes examples of augmented reality applications based on CyberCode, and discusses some key characteristics of tagging technologies that must be taken into account when designing augmented reality environments."
]
}
|
1002.3684
|
2136424517
|
Independent component analysis (ICA) aims at decomposing an observed random vector into statistically independent variables. Deflation-based implementations, such as the popular one-unit FastICA algorithm and its variants, extract the independent components one after another. A novel method for deflationary ICA, referred to as RobustICA, is put forward in this paper. This simple technique consists of performing exact line search optimization of the kurtosis contrast function. The step size leading to the global maximum of the contrast along the search direction is found among the roots of a fourth-degree polynomial. This polynomial rooting can be performed algebraically, and thus at low cost, at each iteration. Among other practical benefits, RobustICA can avoid prewhitening and deals with real- and complex-valued mixtures of possibly noncircular sources alike. The absence of prewhitening improves asymptotic performance. The algorithm is robust to local extrema and shows a very high convergence speed in terms of the computational cost required to reach a given source extraction quality, particularly for short data records. These features are demonstrated by a comparative numerical analysis on synthetic data. RobustICA's capabilities in processing real-world data involving noncircular complex strongly super-Gaussian sources are illustrated by the biomedical problem of atrial activity (AA) extraction in atrial fibrillation (AF) electrocardiograms (ECGs), where it outperforms an alternative ICA-based technique.
|
Amari @cite_15 @cite_56 puts forward adaptive rules for learning the step size in neural algorithms for BSS ICA, which are more pertinent in the context of the present work. The idea is to make the step size depend on the gradient norm, in order to obtain a fast evolution at the beginning of the iterations and then a decreasing misadjustment as a stationary point is reached. These step-size learning rules, in turn, include other learning coefficients which must be set appropriately. Although the resulting algorithms are said to be robust to the choice of these coefficients, their optimal selection remains application-dependent. Other guidelines for choosing the step size in natural gradient algorithms are given in @cite_60 , but are merely based on local stability conditions. In a non-linear mixing setup, Khor and co-workers put forward a fuzzy logic approach to control the learning rate of a separation algorithm based on the natural gradient @cite_13 .
|
{
"cite_N": [
"@cite_60",
"@cite_15",
"@cite_13",
"@cite_56"
],
"mid": [
"2158035059",
"1970789124",
"2088551704",
""
],
"abstract": [
"This paper reports a study on the problem of the blind simultaneous extraction of specific groups of independent components from a linear mixture. This paper first presents a general overview and unification of several information theoretic criteria for the extraction of a single independent component. Then, our contribution fills the theoretical gap that exists between extraction and separation by presenting tools that extend these criteria to allow the simultaneous blind extraction of subsets with an arbitrary number of independent components. In addition, we analyze a family of learning algorithms based on Stiefel manifolds and the natural gradient ascent, present the nonlinear optimal activations (score) functions, and provide new or extended local stability conditions. Finally, we illustrate the performance and features of the proposed approach by computer-simulation experiments.",
"When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed.",
"This paper proposes a new nonlinear blind source separation algorithm with hybridisation of fuzzy logic based learning rate control and simulated annealing to improve the global solution search. Benefits of fuzzy systems and simulated annealing are incorporated into a multilayer perceptron network. Fuzzy logic control allows adjustments of learning rate to enhance the rate of convergence of the algorithm. Simulated annealing is implemented to avoid the algorithm becoming trapped in local minima. A simple and computationally efficient method for controlling learning rate and ensuring a global solution is proposed. The performance of the proposed algorithm in terms of convergence of entropy, is studied alongside other techniques of learning rate adaptation. Simulations show that the proposed nonlinear algorithm outperforms other existing nonlinear algorithms based on fixed learning rates.",
""
]
}
|
1002.4228
|
2168975319
|
An isogeny between elliptic curves is an algebraic morphism which is a group homomorphism. Many applications in cryptography require evaluating large degree isogenies between elliptic curves efficiently. For ordinary curves of the same endomorphism ring, the previous best known algorithm has a worst case running time which is exponential in the length of the input. In this paper we show this problem can be solved in subexponential time under reasonable heuristics. Our approach is based on factoring the ideal corresponding to the kernel of the isogeny, modulo principal ideals, into a product of smaller prime ideals for which the isogenies can be computed directly. Combined with previous work of , our algorithm yields equations for large degree isogenies in quasi-optimal time given only the starting curve and the kernel.
|
Bisson and Sutherland @cite_6 have developed an algorithm to compute the endomorphism ring of an elliptic curve in subexponential time, using relation-finding techniques which largely overlap with ours. Although our main results were obtained independently, we have incorporated their ideas into our algorithm in several places, resulting in a simpler presentation as well as a large speedup compared to the original version of our work. Given two elliptic curves @math and @math over @math admitting a normalized isogeny @math of degree @math , the equation of @math as a rational function contains @math coefficients. @cite_3 have published an algorithm which produces this equation, given @math , @math , and @math . Their algorithm has running time @math , which is quasi-optimal given the size of the output. Using our algorithm, it is possible to compute @math from @math and @math in time @math for large @math . Hence the combination of the two algorithms can produce the equation of @math within a quasi-optimal running time of @math , given only @math and @math (or @math and @math ), without the need to provide @math in the input.
|
{
"cite_N": [
"@cite_3",
"@cite_6"
],
"mid": [
"2048276889",
"2081640523"
],
"abstract": [
"We survey algorithms for computing isogenies between elliptic curves defined over a field of characteristic either 0 or a large prime. We introduce a new algorithm that computes an isogeny of degree l (l different from the characteristic) in time quasi-linear with respect to l. This is based in particular on fast algorithms for power series expansion of the Weierstrass ℘-function and related functions.",
"Abstract We present two algorithms to compute the endomorphism ring of an ordinary elliptic curve E defined over a finite field F q . Under suitable heuristic assumptions, both have subexponential complexity. We bound the complexity of the first algorithm in terms of log q , while our bound for the second algorithm depends primarily on log | D E | , where D E is the discriminant of the order isomorphic to End ( E ) . As a byproduct, our method yields a short certificate that may be used to verify that the endomorphism ring is as claimed."
]
}
|
1002.2436
|
2180630352
|
The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, a strictly more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system is shown. Our result applies to almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
|
Quantum versions of the Leftover Hash Lemma @cite_0 for two-universal families of hash functions have been used in the context of privacy amplification against a quantum adversary @cite_5 @cite_14 . This application has gained prominence with the rise of quantum cryptography and quantum key distribution in particular. There, the side information @math is gathered during a key agreement process between two parties by an eavesdropper who is not necessarily limited to classical information processing. The quantum generalization of the Leftover Hash Lemma is then used to bound the amount of secret key that can be distilled by the two parties.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_14"
],
"mid": [
"",
"2114805880",
"2099340097"
],
"abstract": [
"",
"Privacy amplification is the art of shrinking a partially secret string Z to a highly secret key S. We show that, even if an adversary holds quantum information about the initial string Z, the key S obtained by two-universal hashing is secure, according to a universally composable security definition. Additionally, we give an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary's (quantum) knowledge about Z. Our result has applications in quantum cryptography. In particular, it implies that many of the known quantum key distribution protocols are universally composable.",
"Let X1,..., Xn be a sequence of n classical random variables and consider a sample Xs1,..., Xsr of r ≤ n positions selected at random. Then, except with (exponentially in r) small probability, the min-entropy Hmin(Xs1 ...Xsr) of the sample is not smaller than, roughly, a fraction r n of the overall entropy Hmin(X1 ...Xn), which is optimal. Here, we show that this statement, originally proved in [S. Vadhan, LNCS 2729, Springer, 2003] for the purely classical case, is still true if the min-entropy Hmin is measured relative to a quantum system. Because min-entropy quantifies the amount of randomness that can be extracted from a given random variable, our result can be used to prove the soundness of locally computable extractors in a context where side information might be quantum-mechanical. In particular, it implies that key agreement in the bounded-storage model-using a standard sample-and-hash protocol-is fully secure against quantum adversaries, thus solving a long-standing open problem."
]
}
|
1002.2436
|
2180630352
|
The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, a strictly more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system is shown. Our result applies to almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
|
Recently, the problem of randomness extraction with quantum side information has generated renewed interest. It has been shown that the classical technique @cite_20 of XORing a classical source about which an adversary holds quantum information with a @math -biased mask results in a uniformly distributed string @cite_4 . See also @cite_2 for a generalization of this work to the fully quantum setting.
|
{
"cite_N": [
"@cite_4",
"@cite_20",
"@cite_2"
],
"mid": [
"1614939177",
"2170255835",
"2143940675"
],
"abstract": [
"Randomness extraction is of fundamental importance for information-theoretic cryptography. It allows to transform a raw key about which an attacker has some limited knowledge into a fully secure random key, on which the attacker has essentially no information. Up to date, only very few randomness-extraction techniques are known to work against an attacker holding quantum information on the raw key. This is very much in contrast to the classical (non-quantum) setting, which is much better understood and for which a vast amount of different techniques are known and proven to work. We prove a new randomness-extraction technique, which is known to work in the classical setting, to be secure against a quantum attacker as well. Randomness extraction is done by xor'ing a so-called δ-biased mask to the raw key. Our result allows to extend the classical applications of this extractor to the quantum setting. We discuss the following two applications. We show how to encrypt a long message with a short key, information-theoretically secure against a quantum attacker, provided that the attacker has enough quantum uncertainty on the message. This generalizes the concept of entropically-secure encryption to the case of a quantum attacker. As second application, we show how to do errorcorrection without leaking partial information to a quantum attacker. Such a technique is useful in settings where the raw key may contain errors, since standard error-correction techniques may provide the attacker with information on, say, a secret key that was used to obtain the raw key.",
"This paper explores what kinds of information two parties must communicate in order to correct errors which occur in a shared secret string W. Any bits they communicate must leak a significant amount of information about W --- that is, from the adversary's point of view, the entropy of W will drop significantly. Nevertheless, we construct schemes with which Alice and Bob can prevent an adversary from learning any useful information about W. Specifically, if the entropy of W is sufficiently high, then there is no function f(W) which the adversary can learn from the error-correction information with significant probability.This leads to several new results: (a) the design of noise-tolerant \"perfectly one-way\" hash functions in the sense of [7], which in turn leads to obfuscation of proximity queries for high entropy secrets W; (b) private fuzzy extractors [11], which allow one to extract uniformly random bits from noisy and nonuniform data W, while also insuring that no sensitive information about W is leaked; and (c) noise tolerance and stateless key re-use in the Bounded Storage Model, resolving the main open problem of Ding [10].The heart of our constructions is the design of strong randomness extractors with the property that the source W can be recovered from the extracted randomness and any string W' which is close to W.",
"An encryption scheme is said to be entropically secure if an adversary whose min-entropy on the message is upper bounded cannot guess any function of the message. Similarly, an encryption scheme is entropically indistinguishable if the encrypted version of a message whose min-entropy is high enough is statistically indistinguishable from a fixed distribution. We present full generalizations of these two concepts to the encryption of quantum states in which the quantum conditional min-entropy, as introduced by Renner, is used to bound the adversary's prior information on the message. A proof of the equivalence between quantum entropic security and quantum entropic indistinguishability is presented. We also provide proofs of security for two different ciphers in this model and a proof for a lower bound on the key length required by any such cipher. These ciphers generalize existing schemes for approximate quantum encryption to the entropic security model."
]
}
|
1002.2436
|
2180630352
|
The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, a strictly more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system is shown. Our result applies to almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
|
However, to achieve even shorter seed lengths, more advanced techniques such as Trevisan's extractor @cite_10 have been studied in @cite_31 @cite_17 @cite_22 . In @cite_17 , it is shown that a seed of length @math is sufficient to generate a key of length @math , where @math is a measure of the size of the adversary's quantum memory. In @cite_22 , the result was extended to the formalism of conditional min-entropies. They attain a key length of @math , which can be arbitrarily larger than @math . Furthermore, as we show in , this key length is almost optimal. Our result may be useful to further improve the performance of these extractors (see the discussion in @cite_22 ).
|
{
"cite_N": [
"@cite_31",
"@cite_10",
"@cite_22",
"@cite_17"
],
"mid": [
"2035009291",
"1967175855",
"",
"2077372202"
],
"abstract": [
"In the classical privacy amplification problem Alice and Bob share information that is only partially secret towards an eavesdropper Charlie. Their goal is to distill this information to a shorter string that is completely secret. The classical privacy amplification problem can be solved almost optimally using extractors. An interesting variant of the problem, where the eavesdropper Charlie is allowed to keep quantum information rather than just classical information, was introduced by Konig, Maurer and Renner. In this setting, the eavesdropper Charlie may entangle himself with the input (without changing it) and the only limitation Charlie has is that it may keep at most b qubits of storage. A natural question is whether there are classical extractors that are good even against quantum storage. Recent work has shown that some classical extractors miserably fail against quantum storage. At the same time, it was shown that some other classical extractors work well even against quantum storage, but all these extractors had a large seed length that was either as large as the extractor output, or as large as the quantum storage available to the eavesdropper. In this paper we show that a modified version of Trevisan's extractor is good even against quantum storage, thereby giving the first such construction with logarithmic seed length. The technique we use is a combination of Trevisan's approach of constructing an extractor from a black-box pseudorandom generator, together with locally list-decodable codes and previous work done on quantum random access codes.",
"We introduce a new approach to constructing extractors. Extractors are algorithms that transform a “weakly random” distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain. We demonstrate an unsuspected connection between extractors and pseudorandom generators. In fact, we show that every pseudorandom generator of a certain kind is an extractor. A pseudorandom generator construction due to Impagliazzo and Wigderson, once reinterpreted via our connection, is already an extractor that beats most known constructions and solves an important open question. We also show that, using the simpler Nisan-Wigderson generator and standard error-correcting codes, one can build even better extractors with the additional advantage that both the construction and the analysis are simple and admit a short self-contained description.",
"",
"We show that Trevisan's extractor and its variants [22,19] are secure against bounded quantum storage adversaries. One instantiation gives the first such extractor to achieve an output length Θ(K-b), where K is the source's entropy and b the adversary's storage, together with a poly-logarithmic seed length. Another instantiation achieves a logarithmic key length, with a slightly smaller output length Θ((K-b) Kγ) for any γ>0. In contrast, the previous best construction [21] could only extract (K b)1 15 bits. Some of our constructions have the additional advantage that every bit of the output is a function of only a polylogarithmic number of bits from the source, which is crucial for some cryptographic applications. Our argument is based on bounds for a generalization of quantum random access codes, which we call quantum functional access codes. This is crucial as it lets us avoid the local list-decoding algorithm central to the approach in [21], which was the source of the multiplicative overhead."
]
}
|
1002.2477
|
2951386007
|
The existing literature on optimal auctions focuses on optimizing the expected revenue of the seller, and is appropriate for risk-neutral sellers. In this paper, we identify good mechanisms for risk-averse sellers. As is standard in the economics literature, we model the risk-aversion of a seller by endowing the seller with a monotone concave utility function. We then seek robust mechanisms that are approximately optimal for all sellers, no matter what their levels of risk-aversion are. We have two main results for multi-unit auctions with unit-demand bidders whose valuations are drawn i.i.d. from a regular distribution. First, we identify a posted-price mechanism called the Hedge mechanism, which gives a universal constant factor approximation; we also show for the unlimited supply case that this mechanism is in a sense the best possible. Second, we show that the VCG mechanism gives a universal constant factor approximation when the number of bidders is even only a small multiple of the number of items. Along the way we point out that Myerson's characterization of the optimal mechanisms fails to extend to utility-maximization for risk-averse sellers, and establish interesting properties of regular distributions and monotone hazard rate distributions.
|
There is some work that deals with risk in the context of auctions. Eso @cite_15 identifies an optimal mechanism for a risk-averse seller, obtained by modifying Myerson's optimal mechanism, which always provides the same revenue for every bid vector; unfortunately, this mechanism does not satisfy ex-post (or even ex-interim) individual rationality, and charges bidders even when they lose. Maskin and Riley @cite_7 identify the optimal Bayesian-incentive compatible mechanism for a risk-neutral seller when the buyers are risk-averse. In our model, we identify mechanisms that are ex-post incentive compatible. So the buyers optimize their utility by bidding truthfully for every realization of the valuations, and thus have no uncertainty or risk to deal with. @cite_0 studies risk-aversion in single-item auctions. Specifically, they show for both the first and second price mechanisms that the optimal reserve price decreases as the level of risk-aversion of the seller increases. In contrast, we identify the optimal truthful mechanism for a risk-averse seller in a single-item auction in (it happens to be a second price mechanism with a reserve), study auctions of two or more items, and identify mechanisms that are simultaneously approximately optimal for all risk-averse sellers.
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_7"
],
"mid": [
"2103992403",
"2054347841",
"2066809750"
],
"abstract": [
"This paper analyzes the effects of buyer and seller risk aversion in first and second-price auctions. The setting is the classic one of symmetric and independent private values, with ex ante homogeneous bidders. However, the seller is able to optimally set the reserve price. In both auctions the seller’s optimal reserve price is shown to decrease in his own risk aversion, and more so in the first-price auction. Thus, greater seller risk aversion increases the ex post efficiency of both auctions, and especially that of the first-price auction. The seller’s optimal reserve price in the first-price, but not in the second-price, auction decreases in the buyers’ risk aversion. Thus, greater buyer risk aversion also increases the ex post efficiency of the first but not the second-price auction. At the interim stage, the first-price auction is preferred by all buyer types in a lower interval, as well as by the seller.",
"Abstract We consider auctions with a risk averse seller in independent private values environments with risk neutral buyers. We show that for every incentive compatible selling mechanism there exists a mechanism which provides deterministically the same (expected) revenue.",
""
]
}
|
1002.2477
|
2951386007
|
The existing literature on optimal auctions focuses on optimizing the expected revenue of the seller, and is appropriate for risk-neutral sellers. In this paper, we identify good mechanisms for risk-averse sellers. As is standard in the economics literature, we model the risk-aversion of a seller by endowing the seller with a monotone concave utility function. We then seek robust mechanisms that are approximately optimal for all sellers, no matter what their levels of risk-aversion are. We have two main results for multi-unit auctions with unit-demand bidders whose valuations are drawn i.i.d. from a regular distribution. First, we identify a posted-price mechanism called the Hedge mechanism, which gives a universal constant factor approximation; we also show for the unlimited supply case that this mechanism is in a sense the best possible. Second, we show that the VCG mechanism gives a universal constant factor approximation when the number of bidders is even only a small multiple of the number of items. Along the way we point out that Myerson's characterization of the optimal mechanisms fails to extend to utility-maximization for risk-averse sellers, and establish interesting properties of regular distributions and monotone hazard rate distributions.
|
Finally, we mention papers that inspire our proof techniques. @cite_11 proposes posted-price mechanisms, and it uses Myerson's mechanism to guide the selection of the prices. We use a similar idea in . Bulow and Klemperer @cite_4 show that the VCG mechanism with @math extra bidders yields better expected revenue than the optimal mechanism so long as the bidder valuations are drawn i.i.d. from a regular distribution. @cite_8 extends the result of Bulow and Klemperer @cite_4 to matroid settings, and introduces the problem of designing markets with good revenue properties. We use ideas from these papers to bound the performance of the VCG mechanism in . The characterization of regular distributions in terms of concave revenue functions is implicit in Myerson @cite_3 , and is used explicitly in @cite_11 and @cite_10 .
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_10",
"@cite_11"
],
"mid": [
"2160747366",
"2078040677",
"",
"2149600441",
"1674324422"
],
"abstract": [
"Which is the more profitable way to sell a company: an auction with no reserve price or an optimally structured negotiation with one less bidder? The authors show, under reasonable assumptions, that the auction is always preferable when bidders' signals are independent. For affiliated signals, the result holds under certain restrictions on the seller's choice of negotiating mechanism. The result suggests that the value of negotiating skill is small relative to the value of additional competition. The paper also shows how the analogies between monopoly theory and auction theory can help derive new results in auction theory. Copyright 1996 by American Economic Association.",
"This paper analyzes the problem of inducing the members of an organization to behave as if they formed a team. Considered is a conglomerate-type organization consisting of a set of semi-autonomous subunits that are coordinated by the organization's head. The head's incentive problem is to choose a set of employee compensation rules that will induce his subunit managers to communicate accurate information and take optimal decisions. The main result exhibits a particular set of compensation rules, an optimal incentive structure, that leads to team behavior. Particular attention is directed to the informational aspects of the problem. An extended example of a resource allocation model is discussed and the optimal incentive structure is interpreted in terms of prices charged by the head for resources allocated to the subunits.",
"",
"We design and analyze approximately revenue-maximizing auctions in general single-parameter settings. Bidders have publicly observable attributes, and we assume that the valuations of indistinguishable bidders are independent draws from a common distribution. Crucially, we assume all valuation distributions are a priori unknown to the seller. Despite this handicap, we show how to obtain approximately optimal expected revenue - nearly as large as what could be obtained if the distributions were known in advance - under quite general conditions. Our most general result concerns arbitrary downward-closed single-parameter environments and valuation distributions that satisfy a standard hazard rate condition. We also assume that no bidder has a unique attribute value, which is obviously necessary with unknown and attribute-dependent valuation distributions. Here, we give an auction that, for every such environment and unknown valuation distributions, has expected revenue at least a constant fraction of the expected optimal welfare (and hence revenue). A key idea in our auction is to associate each bidder with another that has the same attribute, with the second bidder's valuation acting as a random reserve price for the first. Conceptually, our analysis shows that even a single sample from a distribution - the second bidder's valuation - is sufficient information to obtain near-optimal expected revenue, even in quite general settings.",
"We consider the classical mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single-parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [Myerson '81]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [Ausubel and Milgrom '06], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [Manelli and Vincent '07]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. These mechanisms are approximately optimal in single-dimensional settings, and avoid many of the properties that make optimal mechanisms impractical. Furthermore, these mechanisms generalize naturally to give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time."
]
}
|
1002.2897
|
2013015797
|
Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. Several important questions are examined, such as the need for a visual highlevel modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems
|
Solver-independence in constraint modeling languages is a recent trend. Just a few languages have been developed under this principle. One example is MiniZinc, which is mainly a subset of the constructs provided by Zinc; its syntax is closely related to OPL, and its solver-independent platform allows models to be translated into Gecode and solver code. This model transformation is performed by a rule-based system called Cadmium @cite_26 , which can be regarded as an extension of Term-Rewriting (TR) @cite_5 and Constraint Handling Rules (CHR) @cite_8 . This process also involves an intermediate model called FlatZinc, which plays a similar role to , to facilitate the translation.
|
{
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_8"
],
"mid": [
"1897365221",
"1559755909",
"2079333278"
],
"abstract": [
"",
"Nonlinear constraint satisfaction or optimisation models need to be reduced to equivalent linear forms before they can be solved by (Integer) Linear Programming solvers. A choice of linearisation methods exist. There are generic linearisations and constraint-specific, user-defined linearisations. Hence a model reformulation system needs to be flexible and open to allow complex and novel linearisations to be specified. In this paper we show how the declarative model reformulation system CADMIUM can be used to effectively transform constraint problems to different linearisations, allowing easy exploration of linearisation possibilities.",
"Abstract Constraint Handling Rules (CHR) are our proposal to allow more flexibility and application-oriented customization of constraint systems. CHR are a declarative language extension especially designed for writing user-defined constraints. CHR are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. In this broad survey we aim at covering all aspects of CHR as they currently present themselves. Going from theory to practice, we will define syntax and semantics for CHR, introduce an important decidable property, confluence, of CHR programs and define a tight integration of CHR with constraint logic programming languages. This survey then describes implementations of the language before we review several constraint solvers – both traditional and nonstandard ones – written in the CHR language. Finally we introduce two innovative applications that benefited from using CHR."
]
}
|
1002.2897
|
2013015797
|
Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. Several important questions are examined, such as the need for a visual highlevel modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems
|
Essence is another solver-independent language. Its syntax is addressed to users with a background in discrete mathematics; this style makes Essence a specification language rather than a modeling language. The Essence execution platform allows specifications to be mapped into and the Minion solver @cite_12 . A model transformation system called Conjure has been developed, but the integration of solver translators is outside its scope. Conjure takes as input an Essence specification and transforms it into an intermediate OPL-like language called Essence'. Translators from Essence' to solver code are written by hand.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"1598380601"
],
"abstract": [
"We present Minion, a new constraint solver. Empirical results on standard benchmarks show orders of magnitude performance gains over state-of-the-art constraint toolkits. These gains increase with problem size --MINION delivers scalable constraint solving. MINION is a general-purpose constraint solver, with an expressive input language based on the common constraint modelling device of matrix models. Focussing on matrix models supports a highly-optimised implementation, exploiting the properties of modern processors. This contrasts with current constraint toolkits, which, in order to provide ever more modelling and solving options, have become progressively more complex at the cost of both performance and usability. MINION is a black box from the user point of view, deliberately providing few options. This, combined with its raw speed, makes MINION a substantial step towards Puget's 'Model and Run' constraint solving paradigm."
]
}
|
1002.3330
|
1585015170
|
Formal semantics offers a complete and rigorous definition of a language. It is important to define different semantic models for a language and different models serve different purposes. Building equivalence between different semantic models of a language strengthen its formal foundation. This paper shows the derivation of denotational semantics from operational semantics of the language cCSP. The aim is to show the correspondence between operational and trace semantics. We extract traces from operational rules and use induction over traces to show the correspondence between the two semantics of cCSP.
|
The semantic correspondence presented here is based on the technique of applying structural induction. A similar approach is also applied by S. Schneider @cite_11 , where an equivalence relation was established between the operational and denotational semantics of timed CSP @cite_0 @cite_12 . Operational rules are defined for timed CSP, and then timed traces and refusals are extracted from the transition rules of a program; it is shown that the pertinent information corresponds to the semantics obtained from the denotational semantic function. By applying structural induction over the terms of timed CSP, it was proved that the behaviour of the transition system is identical to that provided by the denotational semantics.
|
{
"cite_N": [
"@cite_0",
"@cite_12",
"@cite_11"
],
"mid": [
"1602114053",
"1566733033",
"2037230951"
],
"abstract": [
"The parallel language CSP [9], an earlier version of which was described in [7], has become a major tool for the analysis of structuring methods and proof systems involving parallelism. The significance of CSP is in the elegance by which a few simply stated constructs (e.g., sequential and parallel composition, nondeterministic choice, concealment, and recursion) lead to a language capable of expressing the full complexity of distributed computing. The difficulty in achieving satisfactory semantic models containing these constructs has been in providing an adequate treatment of nondeterminism, deadlock, and divergence. Fortunately, as a result of an evolutionay development in [S], [lo], [15], [l], [14], [2], and [4] we now have several such models. The purpose of this paper is to report the development of the first real-time models of CSP to be compatible with the properties and proof systems of the abovementioned untimed models. Our objective in this development is the construction of a timed CSP model which satisfies the following: (1) Continuous with respect to time. The time domain should consist of all nonnegative real numbers, and there should be no lower bound on the time difference between consecutive observable events from two processes operating asynchronously in parallel. (2) Realistic. A given process should engage in only finitely many events in a bounded period of time. (3) Continuous and distributive with respect to semantic operators. All semantic operators should be continuous, and all the basic operators as defined in [2], except recursion, should distribute over nondeterministic choice. (4) Verijiable design. The model should provide a basis for the definition, specification, and verification of time critical processes with an adequate treatment of nondeterminism, which assists in avoidance of deadlock and divergence.",
"",
"An operational semantics is defined for the language of timed CSP, in terms of two relations: an evolution relation, which describes when a process becomes another simply by allowing time to pass; and a timed transition relation, which describes when a process may become another by performing an action at a particular time. It is shown how the timed behaviours used as the basis for the denotational models of the language may be extracted from the operational semantics. Finally, the failures model for timed CSP is shown to be equivalent to may-testing and, thus, to trace congruence."
]
}
|
1002.3330
|
1585015170
|
Formal semantics offers a complete and rigorous definition of a language. It is important to define different semantic models for a language and different models serve different purposes. Building equivalence between different semantic models of a language strengthen its formal foundation. This paper shows the derivation of denotational semantics from operational semantics of the language cCSP. The aim is to show the correspondence between operational and trace semantics. We extract traces from operational rules and use induction over traces to show the correspondence between the two semantics of cCSP.
|
A similar problem was also investigated in @cite_4 , where a metric structure was employed to relate the operational and denotational models of a given language. To relate the semantic models, it was proved that the two models coincide: the denotational models were extended, and structural induction was applied over the terms of the language.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"1991224384"
],
"abstract": [
"Our focus is on the semantics of programming and specification languages. Over the years, different approaches to give semantics to these languages have been put forward. We restrict ourselves to the operational and the denotational approach, two main streams in the field of semantics. Two notions which play an important role in this paper are (non)determinism and (non)termination. Nondeterminism arises naturally in concurrent languages and it is a key concept in specification languages. Nontermination is usually caused by recursive constructs which are crucial in programming. The operational models are based on labelled transition systems. The definition of these systems is guided by the structure of the language. Metric spaces are an essential ingredient of our denotational models. We exploit the metric structure to model recursive constructs and to define operators on infinite entities. Furthermore, we also employ the metric structure to relate operational and denotational models for a given language. On the basis of four toy languages, we develop some general theory for defining operational and denotational semantic models and for relating them. This theory is applicable to a wide variety of languages. We start with a very simple deterministic and terminating imperative programming language. By adding the recursive while statement, we obtain a deterministic and nonterminating language. Next, we augment the language with the parallel composition resulting in a bounded nondeterministic and nonterminating language. Finally, we add some timed constructs. We obtain an unbounded nondeterministic and nonterminating specification language."
]
}
|
1002.3330
|
1585015170
|
Formal semantics offers a complete and rigorous definition of a language. It is important to define different semantic models for a language and different models serve different purposes. Building equivalence between different semantic models of a language strengthen its formal foundation. This paper shows the derivation of denotational semantics from operational semantics of the language cCSP. The aim is to show the correspondence between operational and trace semantics. We extract traces from operational rules and use induction over traces to show the correspondence between the two semantics of cCSP.
|
Other than using induction, Hoare and He @cite_10 presented the idea of unifying different programming paradigms and showed how to derive the operational semantics of a sequential language from its denotational presentation. They derive algebraic laws from the denotational definition and then derive the operational semantics from the algebraic laws. Similar to our work, Huibiao @cite_14 derived denotational semantics from operational semantics for a subset of Verilog @cite_9 . However, the derivation was done differently from our method: the authors defined transitional condition and phase semantics from the operational semantics, and the denotational semantics are derived from the sequential composition of the phase semantics. The authors also derived operational semantics from denotational semantics @cite_8 .
|
{
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_10",
"@cite_8"
],
"mid": [
"2107517480",
"1499638282",
"2110941259",
"2154982982"
],
"abstract": [
"The Verilog hardware description language (HDL) is widely used to model the structure and behaviour of digital systems ranging from simple hardware building blocks to complete systems. Its semantics is based an the scheduling of events and the propagation of changes. Different Verilog models of the same device are used during the design process and it is important that these be 'equivalent'; formal methods for ensuring this could be commercially significant. Unfortunately, there is very little theory available to help. This self-contained tutorial paper explains the semantics of Verilog informally and poses a number of logical and semantic problems that are intended to provoke further research. Any theory developed to support Verilog is likely to be useful for the analysis of the similar (but more complex) language VHDL.",
"In this paper operational equivalence of simple functional programs is defined, and certain basic theorems proved thereupon. These basic theorems include congruence, least fixed-point, an analogue to continuity, and fixed-point induction. We then show how any ordering on programs for which these theorems hold can be easily extended to give a fully abstract cpo for the language, giving evidence that any operational semantics with these basic theorems proven is complete with respect to a denotational semantics. Furthermore, the mathematical tools used in the paper are minimal, the techniques should be applicable to a wide class of languages, and all proofs are constructive.",
"Professional practice in a mature engineering discipline is based on relevant scientific theories, usually expressed in the language of mathematics. A mathematical theory of programming aims to provide a similar basis for specification, design and implementation of computer programs. The theory can be presented in a variety of styles, including 1. Denotational, relating a program to a specification of its observable properties and behaviour. 2. Algebraic, providing equations and inequations for comparison, transformation and optimisation of designs and programs. 3. Operational, describing individual steps of a possible mechanical implementation. This paper presents simple theories of sequential non-deterministic programming in each of these three styles; by deriving each presentation from its predecessor in a cyclic fashion, mutual consistency is assured.",
"This paper presents the derivation of an operational semantics from a denotational semantics for a subset of the widely used hardware description language Verilog. Our aim is to build equivalence between the operational and denotational semantics. We propose a discrete denotational semantic model for Verilog. A phase semantics is provided for each type of transition in order to derive the operational semantics."
]
}
|
1002.1557
|
1888086716
|
We show that the class of finite rooted binary plane trees is a Ramsey class (with respect to topological embeddings that map leaves to leaves). That is, for all such trees P,H and every natural number k there exists a tree T such that for every k-coloring of the (topological) copies of P in T there exists a (topological) copy H' of H in T such that all copies of P in H' have the same color. When the trees are represented by the so-called rooted triple relation, the result gives rise to a Ramsey class of relational structures with respect to induced substructures.
|
Milliken @cite_9 proved a result that can be considered to be a generalization of both the statement of Halpern-Läuchli and of Deuber; however, it again does not imply our result in any obvious way for the same reasons as mentioned above for Deuber's result.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2073797527"
],
"abstract": [
"Abstract We prove a Ramsey theorem for trees. The infinite version of this theorem can be stated: if T is a rooted tree of infinite height with each node of T having at least one but finitely many immediate successors, if n is a positive integer, and if the collection of all strongly embedded, height-n subtrees of T is partitioned into finitely many classes, then there must exist a strongly embedded subtree S of T with S having infinite height and with all the strongly embedded, height-n subtrees of S in the same class."
]
}
|
1002.1557
|
1888086716
|
We show that the class of finite rooted binary plane trees is a Ramsey class (with respect to topological embeddings that map leaves to leaves). That is, for all such trees P,H and every natural number k there exists a tree T such that for every k-coloring of the (topological) copies of P in T there exists a (topological) copy H' of H in T such that all copies of P in H' have the same color. When the trees are represented by the so-called rooted triple relation, the result gives rise to a Ramsey class of relational structures with respect to induced substructures.
|
The result presented here gives rise to a new Ramsey class, and hence contributes to the classification program. We have to describe how to represent rooted binary plane trees as relational structures. Our relational structures will be ordered, i.e., the signature contains a binary relation symbol @math that is interpreted by a linear order. The tree structure is represented by a single ternary relation symbol as follows. For leaves @math of a tree, we write @math if the least common ancestor of @math and @math is below the least common ancestor of @math and @math in the tree; the relation @math is also called the rooted triple relation, following terminology in phylogenetic reconstruction @cite_14 @cite_15 @cite_7 . It is known that a rooted binary tree is described up to isomorphism by the rooted triple relation (see e.g. @cite_14 ).
|
{
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_7"
],
"mid": [
"2130775582",
"2075503802",
"2011729349"
],
"abstract": [
"Abstract We give an algorithm for finding the set of all rooted trees with labelled leaves having subtrees homeomorphic to each of a given set of rooted trees with labelled leaves. This type of problem arises in the study of evolutionary trees.",
"In taxonomy and other branches of classification it is useful to know when tree-like classifications on overlapping sets of labels can be consistently combined into a parent tree. This paper considers the computation complexity of this problem. Recognizing when a consistent parent tree exists is shown to be intractable (NP-complete) for sets of unrooted trees, even when each tree in the set classifies just four labels. Consequently determining the compatibility of qualitative characters and partial binary characters is, in general, also NP-complete. However for sets of rooted trees an algorithm is described which constructs the “strict consensus tree” of all consistent parent trees (when they exist) in polynomial time. The related question of recognizing when a set of subtrees uniquely defines a parent tree is also considered, and a simple necessary and sufficient condition is described for rooted trees.",
"We are given a set ( T ) = T 1 ,T 2 , . . .,T k of rooted binary trees, each T i leaf-labeled by a subset ( L (T_i) 1,2, . . ., n ) . If T is a tree on 1,2, . . .,n , we let ( T| L ) denote the minimal subtree of T induced by the nodes of ( L ) and all their ancestors. The consensustreeproblem asks whether there exists a tree T * such that, for every i , ( T^* | L (T_i) ) is homeomorphic to T i ."
]
}
|
1002.1557
|
1888086716
|
We show that the class of finite rooted binary plane trees is a Ramsey class (with respect to topological embeddings that map leaves to leaves). That is, for all such trees P,H and every natural number k there exists a tree T such that for every k-coloring of the (topological) copies of P in T there exists a (topological) copy H' of H in T such that all copies of P in H' have the same color. When the trees are represented by the so-called rooted triple relation, the result gives rise to a Ramsey class of relational structures with respect to induced substructures.
|
As we have mentioned in the previous subsection, any Ramsey class that is closed under taking substructures (and our Ramsey class @math is obviously closed under taking substructures) is an amalgamation class, and therefore there exists a unique countable homogeneous structure @math such that @math is exactly the class of all finite structures that embed into @math . The structure @math (i.e., the reduct of @math that only contains the rooted triple relation without an ordering on the domain) is well-known to model-theorists and in the theory of infinite permutation groups, and also has many explicit constructions; see e.g. @cite_4 @cite_5 . Its automorphism group is oligomorphic, 2-transitive, and 3-set-transitive, but not 3-transitive. The rooted triple relation @math is a @math -relation in the terminology of @cite_5 .
|
{
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2017155572",
"417925666"
],
"abstract": [
"Preparation Semilinear order relations Abstract chain sets General betweenness relations Abstract direction sets Applications and commentary References.",
"1. Introduction 2. Preliminaries 3. Examples and growth rates 4. Subgroups 5. Miscellaneous topics."
]
}
|
1002.1843
|
2952550098
|
This paper defines the Arrwwid number of a recursive tiling (or space-filling curve) as the smallest number w such that any ball Q can be covered by w tiles (or curve sections) with total volume O(vol(Q)). Recursive tilings and space-filling curves with low Arrwwid numbers can be applied to optimise disk, memory or server access patterns when processing sets of points in d-dimensional space. This paper presents recursive tilings and space-filling curves with optimal Arrwwid numbers. For d >= 3, we see that regular cube tilings and space-filling curves cannot have optimal Arrwwid number, and we see how to construct alternatives with better Arrwwid numbers.
|
Jagadish, Kumar and studied how well space-filling curves succeed in keeping the number of fragments needed to cover a query range low @cite_3 @cite_8 @cite_2 @cite_20 . The curve quality measures in their work are based on the number of fragments needed to cover a query range, averaged over a selection of query ranges that depends on the underlying tiling. As a result, their measures can only be used to analyse space-filling curves with the same underlying tiling; in particular they assume a tiling that subdivides squares into smaller squares, expanded down to a fixed level of recursion. This class of curves includes well-known curves such as the Hilbert curve @cite_15 and Z-order, also known as Morton order or Lebesgue order @cite_16 . However, it does not include, for example, Peano's curve @cite_11 , which is based on subdividing squares into nine smaller squares and seems to be the curve of choice in certain applications @cite_9 @cite_1 . The work by Jagadish, Kumar and does not enable a comparison between, for example, Hilbert's curve and Peano's curve.
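As a small aside, the Z-order (Morton order) mentioned above is obtained by bit interleaving, which also makes visible why its underlying tiling subdivides each square into four squares. The sketch below is not from the cited works; it is a generic Morton-index routine under the assumption of non-negative integer cell coordinates.

```python
def morton_index(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y (y occupying the odd positions)."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # bit i of x goes to position 2i
        z |= ((y >> i) & 1) << (2 * i + 1)    # bit i of y goes to position 2i + 1
    return z

# The four cells of a 2x2 grid are visited in a "Z" pattern:
assert [morton_index(x, y, 1) for (x, y) in ((0, 0), (1, 0), (0, 1), (1, 1))] == [0, 1, 2, 3]
```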
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2063378448",
"2150486346",
"",
"2015190123",
"1493714695",
"2072605585",
"",
"",
"2040879131"
],
"abstract": [
"Abstract We give closed-form expressions for the average number of runs to cover an arbitrary square region in two dimensions using a Hilbert curve. The practical use of the derived formula is that it allows the estimation of the quality of the linearization obtained by using the Hilbert curve to map from a two-dimensional space to a one-dimensional space. Hilbert curves are used extensively as a basis for multi-dimensional indexing structures, and for declustering multi-dimensional data.",
"One of the keys to tap the full performance potential of current hardware is the optimal utilization of cache memory. Cache oblivious algorithms are designed to inherently benefit from any underlying hierarchy of caches, but do not need to know about the exact structure of the cache. In this paper, we present a cache oblivious algorithm for matrix multiplication. The algorithm uses a block recursive structure and an element ordering that is based on Peano curves. In the resulting code, index jumps can be totally avoided, which leads to an asymptotically optimal spatial and temporal locality of the data access.",
"",
"There is often a need to map a multi-dimensional space on to a one-dimensional space. For example, this kind of mapping has been proposed to permit the use of one-dimensional indexing techniques to a multi-dimensional index space such as in a spatial database. This kind of mapping is also of value in assigning physical storage, such as assigning buckets to records that have been indexed on multiple attributes, to minimize the disk access effort. In this paper, we discuss what the desired properties of such a mapping are, and evaluate, through analysis and simulation, several mappings that have been proposed in the past. We present a mapping based on Hilbert's space-filling curve, which out-performs previously proposed mappings on average over a variety of different operating conditions.",
"This paper proposes a new interleaving-based method for spatial clustering developed by combining gray codes with a new ordering technique called nu-ordering. This method is compared with two existing interleaving-based techniques and another technique called hilbert-ordering. The performance comparison is done by means of a simulation study. The results show that the choice of the technique affects the clustering performance dramatically. Among the four techniques considered, both nu-ordering and hilbert method outperformed the other two methods by more than 35 ; however, with minor exceptions, the hilbert technique was the best among the four techniques evaluated.",
"",
"",
"",
"Dans cette Note on determine deux fonctions x et y, uniformes et continues d’une variable (reelle) t, qui, lorsque t varie dans l’intervalle (0, 1), prennent toutes les couples de valeurs telles que 0≤x≤1, 0≤y≤1. Si l’on appelle, suivant l’usage, courbe continue le lieu des points dont les coordonnees sont des fonctions continues d’une variable, on a ainsi un arc de courbe qui passe par tous les points d’un carre. Donc, etant donne un arc de courbe continue, sans faire d’autres hypotheses, il n’est pas toujours possible de le renfermer dans une aire arbitrairement petite."
]
}
|
1002.1843
|
2952550098
|
This paper defines the Arrwwid number of a recursive tiling (or space-filling curve) as the smallest number w such that any ball Q can be covered by w tiles (or curve sections) with total volume O(vol(Q)). Recursive tilings and space-filling curves with low Arrwwid numbers can be applied to optimise disk, memory or server access patterns when processing sets of points in d-dimensional space. This paper presents recursive tilings and space-filling curves with optimal Arrwwid numbers. For d >= 3, we see that regular cube tilings and space-filling curves cannot have optimal Arrwwid number, and we see how to construct alternatives with better Arrwwid numbers.
|
The Arrwwid number, as defined above, does not have this limitation: it admits a comparison between curves with different underlying tilings. Nevertheless , too, only studied curves based on the tiling with four squares per square @cite_7 . The Arrwwid number of such a tiling is four. studied what can be achieved by controlling the order in which the tiles are stored: they presented an ordering scheme, the space-filling curve, that guarantees that whenever four tiles are needed, at least two of them are consecutive on disk. Thus these four tiles can be divided into at most three sets such that the tiles within each set are consecutive, and thus the Arrwwid number of the space-filling curve is three. also proved that one cannot do better: no ordering scheme of this particular tiling has Arrwwid number less than three.
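The quantity being bounded here, i.e. the number of contiguous curve fragments needed to cover a query range, can be made concrete with a generic counting utility. The sketch below is illustrative and not from the cited work; the row-by-row scan order used in the example is an assumption chosen for simplicity, and any cell-ordering function (for instance a space-filling-curve index) can be plugged in instead.

```python
def fragments_needed(order, cells):
    """Count maximal runs of consecutive curve positions covering `cells`.

    `order` maps a grid cell (x, y) to its position along the chosen ordering;
    `cells` is an iterable of the grid cells intersecting the query range.
    """
    positions = sorted(order(x, y) for (x, y) in cells)
    if not positions:
        return 0
    return 1 + sum(1 for p, q in zip(positions, positions[1:]) if q != p + 1)

# Cells of the query square [1,2] x [1,2] on a 4x4 grid.
query_cells = [(x, y) for x in (1, 2) for y in (1, 2)]

# A plain row-by-row scan needs one fragment per row of the query square:
print(fragments_needed(lambda x, y: 4 * y + x, query_cells))   # -> 2
```

An Arrwwid number of three means that, for the ordering in question, this count never exceeds three (subject to the volume condition) no matter which square is queried.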
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2721204918"
],
"abstract": [
"We are given a two-dimensional square grid of size N×N, where N∶=2n and n≥0. A space filling curve (SFC) is a numbering of the cells of this grid with numbers from c+1 to c+N2, for some c≥0. We call a SFC recursive (RSFC) if it can be recursively divided into four square RSFCs of equal size. Examples of well-known RSFCs include the Hilbert curve, the z-curve, and the Gray code."
]
}
|
1002.1994
|
1614760033
|
We assume data independently sampled from a mixture distribution on the unit ball of the D-dimensional Euclidean space with K+1 components: the first component is a uniform distribution on that ball representing outliers and the other K components are uniform distributions along K d-dimensional linear subspaces restricted to that ball. We study both the simultaneous recovery of all K underlying subspaces and the recovery of the best l0 subspace (i.e., with largest number of points) by minimizing the lp-averaged distances of data points from d-dimensional subspaces of the D-dimensional space. Unlike other lp minimization problems, this minimization is non-convex for all p>0 and thus requires different methods for its analysis. We show that if 0<p<=1, then both all underlying subspaces and the best l0 subspace can be recovered; if K>1 and p>1, then we show that both all underlying subspaces and the best l0 subspace cannot be recovered and even nearly recovered. Further relaxations are also discussed. We use the results of this paper for partially justifying recent effective algorithms for modeling data by mixtures of multiple subspaces as well as for discussing the effect of using variants of lp minimizations in RANSAC-type strategies for single subspace recovery.
|
Basis pursuit @cite_35 uses @math minimization to search for the sparsest solutions (i.e., solutions minimizing the @math norm) of an undercomplete system of linear equations. It is used for decomposing a signal as a linear combination of few representative elements from a large and redundant dictionary of functions. In this application one often preprocesses the data by normalizing the columns of the underlying matrix by their @math norm. Donoho and Elad @cite_53 have shown that "sufficiently sparse" solutions can be completely recovered by minimizing the @math norm instead of the @math norm. However, this result restricts the size of the mutual incoherence @math of the dictionary and consequently the size of the sparse solution (which is inversely controlled by @math ). Other works @cite_41 @cite_5 @cite_21 @cite_30 show that for the overwhelming majority of matrices representing undercomplete systems, the minimal @math solution of each system coincides with the sparsest one as long as the solution is sufficiently sparse. Moreover, this fact holds even when noise is added to the decomposed signal (with a slight modification of the problem formulation).
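To illustrate the l1-relaxation idea numerically, the following sketch (not from the cited papers) solves the basis pursuit problem min ||x||_1 subject to Ax = y as a linear program via the standard splitting x = u - v with u, v >= 0, and recovers a sparse vector from an underdetermined random system; the problem sizes and the use of scipy.optimize.linprog are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 20, 60, 3                       # n equations, m unknowns, k nonzeros
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. A x = y, as an LP with x = u - v and u, v >= 0.
c = np.ones(2 * m)                        # objective value is sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With the true vector well below the sparsity thresholds discussed above, the l1 solution typically coincides with the sparsest one, which is the phenomenon these papers quantify.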
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_41",
"@cite_53",
"@cite_21",
"@cite_5"
],
"mid": [
"2164452299",
"1986931325",
"2145096794",
"2154332973",
"2114147096",
"2050834445"
],
"abstract": [
"Suppose we wish to recover a vector x0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m",
"The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an \"optimal\" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, in abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.",
"This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.",
"Given a dictionary D = dk of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the l1 norm of the coefficients γ. In this article, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We sketch three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of over-complete independent component models.",
"We consider inexact linear equations y ≈ Φx where y is a given vector in R n , Φ is a given n x m matrix, and we wish to find x 0,∈ as sparse as possible while obeying ∥y - Φx 0,∈ ∥ 2 ≤ ∈. In general, this requires combinatorial optimization and so is considered intractable. On the other hand, the l 1 -minimization problem min ∥x∥ 1 subject to ∥y - Φx∥ 2 ≤ e is convex and is considered tractable. We show that for most Φ, if the optimally sparse approximation x 0,∈ is sufficiently sparse, then the solution x 1,∈ of the l 1 -minimization problem is a good approximation to x 0,∈ . We suppose that the columns of Φ are normalized to the unit l 2 -norm, and we place uniform measure on such Φ. We study the underdetermined case where m ∼ τn and τ > 1, and prove the existence of p = p(r) > 0 and C = C(p, τ) so that for large n and for all Φ's except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ∥y - Φx 0 ∥ 2 ≤ ∈ by a coefficient vector x 0 e R m with fewer than ρ · n nonzeros, ∥x 1,∈ - x 0 ∥ 2 ≤ C ≤ ∈. This has two implications. First, for most Φ, whenever the combinatorial optimization result x 0,∈ would be very sparse, x 1,∈ is a good approximation to x 0,∈ . Second, suppose we are given noisy data obeying y = Φx 0 + z where the unknown x 0 is known to be sparse and the noise ∥z∥ 2 ≤ ∈. For most Φ, noise-tolerant l 1 -minimization will stably recover x 0 from y in the presence of noise z. We also study the barely determined case m = n and reach parallel conclusions by slightly different arguments. Proof techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices.",
"We consider linear equations y = Φx where y is a given vector in ℝn and Φ is a given n × m matrix with n 0 so that for large n and for all Φ's except a negligible fraction, the following property holds: For every y having a representation y = Φx0by a coefficient vector x0 ∈ ℝmwith fewer than ρ · n nonzeros, the solution x1of the 1-minimization problem is unique and equal to x0. In contrast, heuristic attempts to sparsely solve such systems—greedy algorithms and thresholding—perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices. © 2006 Wiley Periodicals, Inc."
]
}
|
1002.1994
|
1614760033
|
We assume data independently sampled from a mixture distribution on the unit ball of the D-dimensional Euclidean space with K+1 components: the first component is a uniform distribution on that ball representing outliers and the other K components are uniform distributions along K d-dimensional linear subspaces restricted to that ball. We study both the simultaneous recovery of all K underlying subspaces and the recovery of the best l0 subspace (i.e., with largest number of points) by minimizing the lp-averaged distances of data points from d-dimensional subspaces of the D-dimensional space. Unlike other lp minimization problems, this minimization is non-convex for all p>0 and thus requires different methods for its analysis. We show that if 0<p<=1, then both all underlying subspaces and the best l0 subspace can be recovered; if K>1 and p>1, then we show that both all underlying subspaces and the best l0 subspace cannot be recovered and even nearly recovered. Further relaxations are also discussed. We use the results of this paper for partially justifying recent effective algorithms for modeling data by mixtures of multiple subspaces as well as for discussing the effect of using variants of lp minimizations in RANSAC-type strategies for single subspace recovery.
|
Despite the many HLM algorithms and strategies for robustness to outliers, there has been little investigation into performance guarantees for such algorithms. Accuracy of segmentation of HLM algorithms under some sampling assumptions is only analyzed in @cite_39 and @cite_40 , whereas tolerance to outliers of an HLM algorithm under some sampling assumptions is only analyzed in @cite_40 (in fact, @cite_40 analyzes the more general problem of modeling data by multiple manifolds, though it assumes an asymptotically zero noise level, unlike @cite_39 ).
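For concreteness, the lp-averaged distance that defines the minimization studied in these works can be evaluated as follows; the sketch is illustrative only, with the orthonormal-basis representation of the subspace and the toy data being assumptions made for the example.

```python
import numpy as np

def lp_averaged_distance(X, B, p):
    """Average of dist(x_i, span(B))^p over the rows x_i of X.

    X is an (N, D) data matrix; B is a (D, d) matrix with orthonormal columns
    spanning a candidate d-dimensional linear subspace.
    """
    residuals = X - (X @ B) @ B.T              # components orthogonal to span(B)
    dists = np.linalg.norm(residuals, axis=1)
    return float(np.mean(dists ** p))

# Toy example: points concentrated near the x-axis in R^3, candidate subspace = x-axis.
rng = np.random.default_rng(1)
X = np.column_stack([rng.standard_normal(100),
                     0.01 * rng.standard_normal(100),
                     0.01 * rng.standard_normal(100)])
B = np.array([[1.0], [0.0], [0.0]])
print(lp_averaged_distance(X, B, p=1.0))       # small: the subspace fits the data well
```

Minimizing this quantity over all d-dimensional subspaces (or over K of them simultaneously) is the non-convex problem whose recovery guarantees the abstract above analyzes.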
|
{
"cite_N": [
"@cite_40",
"@cite_39"
],
"mid": [
"2014739666",
"2141998202"
],
"abstract": [
"In the context of clustering, we assume a generative model where each cluster is the result of sampling points in the neighborhood of an embedded smooth surface; the sample may be contaminated with outliers, which are modeled as points sampled in space away from the clusters. We consider a prototype for a higher-order spectral clustering method based on the residual from a local linear approximation. We obtain theoretical guarantees for this algorithm and show that, in terms of both separation and robustness to outliers, it outperforms the standard spectral clustering algorithm (based on pairwise distances) of Ng, Jordan and Weiss (NIPS '01). The optimal choice for some of the tuning parameters depends on the dimension and thickness of the clusters. We provide estimators that come close enough for our theoretical purposes. We also discuss the cases of clusters of mixed dimensions and of clusters that are generated from smoother surfaces. In our experiments, this algorithm is shown to outperform pairwise spectral clustering on both simulated and real data.",
"The problem of Hybrid Linear Modeling (HLM) is to model and segment data using a mixture of affine subspaces. Different strategies have been proposed to solve this problem, however, rigorous analysis justifying their performance is missing. This paper suggests the Theoretical Spectral Curvature Clustering (TSCC) algorithm for solving the HLM problem and provides careful analysis to justify it. The TSCC algorithm is practically a combination of Govindu’s multi-way spectral clustering framework (CVPR 2005) and ’s spectral clustering algorithm (NIPS 2001). The main result of this paper states that if the given data is sampled from a mixture of distributions concentrated around affine subspaces, then with high sampling probability the TSCC algorithm segments well the different underlying clusters. The goodness of clustering depends on the within-cluster errors, the between-clusters interaction, and a tuning parameter applied by TSCC. The proof also provides new insights for the analysis of (NIPS 2001)."
]
}
|
1002.0123
|
1660338801
|
In a bi-directional relay channel, a pair of nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has mostly considered information theoretic limits of the bi-directional relay channel with two terminal nodes (or end users) and one relay. In this work we consider bi-directional relaying with one base station, multiple terminal nodes and one relay, all of which operate in half-duplex modes. We assume that each terminal node communicates with the base-station in a bi-directional fashion through the relay and do not place any restrictions on the channels between the users, relays and base-stations; that is, each node has a direct link with every other node. Our contributions are three-fold: 1) the introduction of four new temporal protocols which fully exploit the two-way nature of the data and outperform simple routing or multi-hop communication schemes by carefully combining network coding, random binning and user cooperation which exploit over-heard and own-message side information, 2) derivations of inner and outer bounds on the capacity region of the discrete-memoryless multi-pair two-way network, and 3) a numerical evaluation of the obtained achievable rate regions and outer bounds in Gaussian noise which illustrate the performance of the proposed protocols compared to simpler schemes, to each other, to the outer bounds, which highlight the relative gains achieved by network coding, random binning and compress-and-forward-type cooperation between terminal nodes.
|
Two-way communications were first considered by Shannon himself @cite_14 , in which he introduced inner and outer bounds on the capacity region of the two-way channel where two full-duplex nodes (which may transmit and receive simultaneously) wish to exchange messages. Since full-duplex operation is, with current technology, of limited practical significance, in this work we assume that the nodes are half-duplex , i.e. at each point in time, a node can either transmit or receive symbols, but not both.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2147248295"
],
"abstract": [
"We consider a three-node network where a relay node establishes a bidirectional communication between the two other nodes using a spectrally efficient decode-and-forward protocol. In the first phase we have the classical multiple-access channel where both nodes transmit a message to the relay node, which then decodes the messages. In the second phase the relay broadcasts a re-encoded composition of them based on the network coding idea. This means that each receiving node uses the same data stream to infer on its intended message. We characterize the optimal transmit strategy for the broadcast phase where either the relay node or the two other nodes are equipped with multiple antennas. Our main result shows that beamforming into the subspace spanned by the channels is always an optimal transmit strategy for the multiple-input single-output bidirectional broadcast channel. Thereby, it shows that correlation between the channels is advantageous. Moreover, this leads to a parametrization of the optimal transmit strategy which specifies the whole capacity region. In retrospect the results are intuitively clear since the single-beam transmit strategy reflects the single stream processing due to the network coding approach."
]
}
|
1002.0123
|
1660338801
|
In a bi-directional relay channel, a pair of nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has mostly considered information theoretic limits of the bi-directional relay channel with two terminal nodes (or end users) and one relay. In this work we consider bi-directional relaying with one base station, multiple terminal nodes and one relay, all of which operate in half-duplex modes. We assume that each terminal node communicates with the base-station in a bi-directional fashion through the relay and do not place any restrictions on the channels between the users, relays and base-stations; that is, each node has a direct link with every other node. Our contributions are three-fold: 1) the introduction of four new temporal protocols which fully exploit the two-way nature of the data and outperform simple routing or multi-hop communication schemes by carefully combining network coding, random binning and user cooperation which exploit over-heard and own-message side information, 2) derivations of inner and outer bounds on the capacity region of the discrete-memoryless multi-pair two-way network, and 3) a numerical evaluation of the obtained achievable rate regions and outer bounds in Gaussian noise which illustrate the performance of the proposed protocols compared to simpler schemes, to each other, to the outer bounds, which highlight the relative gains achieved by network coding, random binning and compress-and-forward-type cooperation between terminal nodes.
|
@math In @cite_23 , an interference network with no direct links between terminal nodes, in which @math half-duplex single-antenna source-destination pairs wish to exchange messages in a bi-directional fashion, is investigated from a diversity-multiplexing gain perspective in the delay-limited high SNR regime.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2120404432"
],
"abstract": [
"This paper considers an interference network composed of K half-duplex single-antenna pairs of users who wish to establish bi-directional communication with the aid of a multi-input-multi-output (MIMO) half-duplex relay node. This channel is referred to as the “MIMO Wireless Switch” since, for the sake of simplicity, our model assumes no direct link between the two end nodes of each pair implying that all communication must go through the relay node (i.e., the MIMO switch). Assuming a delay-limited scenario, the fundamental limits in the high signal-to-noise ratio (SNR) regime is analyzed using the diversity-multiplexing tradeoff (DMT) framework. Our results sheds light on the structure of optimal transmission schemes and the gain offered by the relay node in two distinct cases, namely reciprocal and non-reciprocal channels (between the relay and end-users). In particular, the existence of a relay node, equipped with a sufficient number of antennas, is shown to increase the multiplexing gain; as compared with the traditional fully connected K-pair interference channel. To the best of our knowledge, this is the first known example where adding a relay node results in enlarging the pre-log factor of the sum rate. Moreover, for the case of reciprocal channels, it is shown that, when the relay has a number of antennas at least equal to the sum of antennas of all the users, static time allocation of decode and forward (DF) type schemes is optimal. On the other hand, in the non-reciprocal scenario, we establish the optimality of dynamic decode and forward in certain relevant scenarios."
]
}
|
1002.0123
|
1660338801
|
In a bi-directional relay channel, a pair of nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has mostly considered information theoretic limits of the bi-directional relay channel with two terminal nodes (or end users) and one relay. In this work we consider bi-directional relaying with one base station, multiple terminal nodes and one relay, all of which operate in half-duplex modes. We assume that each terminal node communicates with the base-station in a bi-directional fashion through the relay and do not place any restrictions on the channels between the users, relays and base-stations; that is, each node has a direct link with every other node. Our contributions are three-fold: 1) the introduction of four new temporal protocols which fully exploit the two-way nature of the data and outperform simple routing or multi-hop communication schemes by carefully combining network coding, random binning and user cooperation which exploit over-heard and own-message side information, 2) derivations of inner and outer bounds on the capacity region of the discrete-memoryless multi-pair two-way network, and 3) a numerical evaluation of the obtained achievable rate regions and outer bounds in Gaussian noise which illustrate the performance of the proposed protocols compared to simpler schemes, to each other, to the outer bounds, which highlight the relative gains achieved by network coding, random binning and compress-and-forward-type cooperation between terminal nodes.
|
@math The authors of @cite_52 consider a similar channel model and propose the use of a CDMA strategy to support multiple users so as to guarantee QoS to different users.
|
{
"cite_N": [
"@cite_52"
],
"mid": [
"2012049708"
],
"abstract": [
"We consider a multiuser two-way relay network where multiple pairs of users communicate with their pre-assigned partners, using a common intermediate relay node, in a two-phase communication scenario. In this system, a pair of partners transmit to the relay sharing a common spreading signature in the first phase, and the relay broadcasts an estimate of the XORed symbol for each user pair in the second phase employing the relaying scheme termed jointly demodulate-and-XOR forward (JD-XOR-F) in [1]. We investigate the joint power control and receiver optimization problem for this multiuser two-way relay system with JD-XOR-F relaying. We show that the total power optimization problem decouples into two subproblems, one for each phase. We construct the distributed power control and receiver updates in each phase which converge to the corresponding unique optimum. Simulation results are presented to demonstrate the significant power savings of the multiuser two-way relay system with the proposed iterative power control and receiver optimization algorithms, as compared to the designs with a iquestone-wayiquest communication perspective."
]
}
|
1002.0123
|
1660338801
|
In a bi-directional relay channel, a pair of nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has mostly considered information theoretic limits of the bi-directional relay channel with two terminal nodes (or end users) and one relay. In this work we consider bi-directional relaying with one base station, multiple terminal nodes and one relay, all of which operate in half-duplex modes. We assume that each terminal node communicates with the base-station in a bi-directional fashion through the relay and do not place any restrictions on the channels between the users, relays and base-stations; that is, each node has a direct link with every other node. Our contributions are three-fold: 1) the introduction of four new temporal protocols which fully exploit the two-way nature of the data and outperform simple routing or multi-hop communication schemes by carefully combining network coding, random binning and user cooperation which exploit over-heard and own-message side information, 2) derivations of inner and outer bounds on the capacity region of the discrete-memoryless multi-pair two-way network, and 3) a numerical evaluation of the obtained achievable rate regions and outer bounds in Gaussian noise which illustrate the performance of the proposed protocols compared to simpler schemes, to each other, to the outer bounds, which highlight the relative gains achieved by network coding, random binning and compress-and-forward-type cooperation between terminal nodes.
|
@math In @cite_42 multiple bi-directional pairs communicate over a shared relay in the absence of a direct link between end nodes. Under a linear deterministic channel interaction model, an interesting equation-forwarding strategy is shown to be capacity-achieving. This intuition is transferred to the two-pair full-duplex bi-directional Gaussian relay network in @cite_10 , where a carefully constructed superposition scheme of random and lattice codes was used to achieve rates within 2 bits of the outer cut-set bound.
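The packet-level intuition behind such network-coded relaying can be shown with the simplest XOR example; the sketch below is illustrative only and is not the coding scheme of the cited papers, which operate at the signal level with lattice and random codes.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Uplink phase: terminals 1 and 2 deliver their messages to the relay.
m1 = b"hello from node 1"
m2 = b"hi back from nd 2"            # padded to the same length for this toy example

# Downlink phase: the relay broadcasts one coded packet instead of two separate ones.
coded = xor_bytes(m1, m2)

# Each terminal cancels its own message (own-message side information).
assert xor_bytes(coded, m1) == m2    # node 1 recovers node 2's message
assert xor_bytes(coded, m2) == m1    # node 2 recovers node 1's message
```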
|
{
"cite_N": [
"@cite_42",
"@cite_10"
],
"mid": [
"2147204048",
"2107762903"
],
"abstract": [
"In this paper we study the capacity region of the multi-pair bidirectional (or two-way) wireless relay network, in which a relay node facilitates the communication between multiple pairs of users. This network is a generalization of the well known bidirectional relay channel, where we have only one pair of users. We examine this problem in the context of the deterministic channel interaction model, which eliminates the channel noise and allows us to focus on the interaction between signals. We characterize the capacity region of this network when the relay is operating at either full-duplex mode or half-duplex mode (with non adaptive listen-transmit scheduling). In both cases we show that the cut-set upper bound is tight and, quite interestingly, the capacity region is achieved by a simple equation-forwarding strategy.",
"We study the capacity of the Gaussian two-pair fullduplex directional (or two-way) relay network with a single-relay supporting the communication of the pairs. This network is a generalization of the well known bidirectional relay channel, where we have only one pair of users. We propose a novel transmission technique which is based on a specific superposition of lattice codes and random Gaussian codes at the source nodes. The relay attempts to decode the Gaussian codewords and the superposition of the lattice codewords of each pair. Then it forwards this information to all users. We analyze the achievable rate of this scheme and show that for all channel gains it achieves to within 2 bits sec Hz per user of the cut-set upper bound on the capacity region of the two-pair bidirectional relay network."
]
}
|
1002.0123
|
1660338801
|
In a bi-directional relay channel, a pair of nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of relays. Recent work has mostly considered information theoretic limits of the bi-directional relay channel with two terminal nodes (or end users) and one relay. In this work we consider bi-directional relaying with one base station, multiple terminal nodes and one relay, all of which operate in half-duplex modes. We assume that each terminal node communicates with the base-station in a bi-directional fashion through the relay and do not place any restrictions on the channels between the users, relays and base-stations; that is, each node has a direct link with every other node. Our contributions are three-fold: 1) the introduction of four new temporal protocols which fully exploit the two-way nature of the data and outperform simple routing or multi-hop communication schemes by carefully combining network coding, random binning and user cooperation which exploit over-heard and own-message side information, 2) derivations of inner and outer bounds on the capacity region of the discrete-memoryless multi-pair two-way network, and 3) a numerical evaluation of the obtained achievable rate regions and outer bounds in Gaussian noise which illustrate the performance of the proposed protocols compared to simpler schemes, to each other, to the outer bounds, which highlight the relative gains achieved by network coding, random binning and compress-and-forward-type cooperation between terminal nodes.
|
@math Finally, in @cite_43 , an arbitrary number of clusters of full-duplex nodes (where all nodes within a cluster wish to exchange messages) are assumed to communicate simultaneously through the use of a single relay in AWGN. Nodes are not able to hear each other, and it is shown that CF achieves within a constant number of bits of capacity regardless of SNR; interesting conclusions are also drawn with respect to lattice coding versus CF.
|
{
"cite_N": [
"@cite_43"
],
"mid": [
"1982109272"
],
"abstract": [
"The multi-user communication channel, in which multiple users exchange information with the help of a single relay terminal, called the multi-way relay channel, is considered. In this model, multiple interfering clusters of users communicate simultaneously, where the users within the same cluster wish to exchange messages among themselves. It is assumed that the users cannot receive each other's signals directly, and hence the relay terminal is the enabler of communication. A relevant metric to study in this scenario is the symmetric rate achievable by all users, which we identify for amplify-and-forward (AF), decode-and-forward (DF) and compress-and-forward (CF) protocols. We also present an upper bound for comparison. The two extreme cases, namely full data exchange, in which every user wants to receive messages of all other users, and pairwise data exchange, consisting of multiple two-way relay channels, are investigated and presented in detail."
]
}
|
1002.0298
|
1903715846
|
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than raw data. Capsules can be deployed in a variety of ways, either on a trusted third party or the user's own computer or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameters. The combination of interface restrictions and policy control lets us bound the impact of an attacker who compromises the service to gain access to the user's capsule or a malicious insider at the service itself.
|
Privacy frameworks that require only participation from users have been proposed as an alternative to web services. In VIS @cite_0 , a social network is maintained in a completely decentralized fashion by users hosting their data on trusted parties of their own choice; there is no centralized web service. Capsules are more compatible with the current ecosystem of a web service storing users' data and rely on the use of interfaces to guarantee privacy. NOYB @cite_50 and LockR @cite_7 are two recent proposals that rely on end-to-end encryption to hide data from social networks; both these approaches are specific to social networks, and their mechanisms can be incorporated in the capsule framework as well, if so desired.
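For contrast with these encryption-based approaches, a minimal sketch of a capsule-style interface may help; it is purely illustrative and not the paper's implementation, and the class name, policy fields and charge limit are assumptions made for the example. The point is only that the hosting service calls the sanctioned interface and never sees the raw data.

```python
class CreditCardCapsule:
    """Toy capsule: a card number wrapped behind a narrow, policy-checked interface."""

    def __init__(self, card_number: str, allowed_merchants: set, max_charge: float):
        self._card_number = card_number              # never returned to the caller
        self._allowed_merchants = allowed_merchants  # user-specified policy
        self._max_charge = max_charge

    def charge(self, merchant: str, amount: float) -> str:
        """The only sanctioned operation: authorize a charge, subject to policy."""
        if merchant not in self._allowed_merchants:
            raise PermissionError(f"merchant {merchant!r} is not allowed by policy")
        if amount > self._max_charge:
            raise PermissionError("amount exceeds the per-charge limit")
        # A real deployment would contact a payment gateway here; the sketch just
        # returns an opaque confirmation, so the raw number never leaves the capsule.
        return f"charged {amount:.2f} at {merchant}"

capsule = CreditCardCapsule("4111-0000-0000-1111", {"example-shop"}, max_charge=100.0)
print(capsule.charge("example-shop", 25.0))
```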
|
{
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_50"
],
"mid": [
"2169270531",
"2103230119",
"2146097977"
],
"abstract": [
"People increasingly generate content on their mobile devices and upload it to third-party services such as Facebook and Google Latitude for sharing and backup purposes. Although these services are convenient and useful, their use has important privacy implications due to their centralized nature and their acquisitions of rights to user-contributed content. This paper argues that people's interests would be be better served by uploading their data to a machine that they themselves own and control. We term these machines Virtual Individual Servers (VISs) because our preferred instantiation is a virtual machine running in a highly-available utility computing infrastructure. By using VISs, people can better protect their privacy because they retain ownership of their data and remain in control over the software and policies that determine what data is shared with whom. This paper also describes a range of applications of VIS proxies. It then presents our initial implementation and evaluation of one of these applications, a decentralized framework for mobile social services based on VISs. Our experience so far suggests that building such applications on top of the VIS concept is feasible and desirable.",
"Today's online social networking (OSN) sites do little to protect the privacy of their users' social networking information. Given the highly sensitive nature of the information these sites store, it is understandable that many users feel victimized and disempowered by OSN providers' terms of service. This paper presents Lockr, a system that improves the privacy of centralized and decentralized online content sharing systems. Lockr offers three significant privacy benefits to OSN users. First, it separates social networking content from all other functionality that OSNs provide. This decoupling lets users control their own social information: they can decide which OSN provider should store it, which third parties should have access to it, or they can even choose to manage it themselves. Such flexibility better accommodates OSN users' privacy needs and preferences. Second, Lockr ensures that digitally signed social relationships needed to access social data cannot be re-used by the OSN for unintended purposes. This feature drastically reduces the value to others of social content that users entrust to OSN providers. Finally, Lockr enables message encryption using a social relationship key. This key lets two strangers with a common friend verify their relationship without exposing it to others, a common privacy threat when sharing data in a decentralized scenario. This paper relates Lockr's design and implementation and shows how we integrate it with Flickr, a centralized OSN, and BitTorrent, a decentralized one. Our implementation demonstrates Lockr's critical primary benefits for privacy as well as its secondary benefits for simplifying site management and accelerating content delivery. These benefits were achieved with negligible performance cost and overhead.",
"Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect."
]
}
|
1002.0298
|
1903715846
|
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than raw data. Capsules can be deployed in a variety of ways, either on a trusted third party or the user's own computer or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameters. The combination of interface restrictions and policy control lets us bound the impact of an attacker who compromises the service to gain access to the user's capsule or a malicious insider at the service itself.
|
The area of cloud computing has seen a lot of work in the context of privacy as well. These include Trusted Cloud @cite_16 , Accountable Cloud @cite_11 , and Cloud Provenance @cite_49 . These works deal with the more complex problem of guaranteeing correctness of code execution on an untrusted third party. Capsules are only concerned with protecting the privacy of the user's data; we assume the application service carries out the service (such as sending correct ticker data) as expected. Airavat @cite_40 proposes a privacy-preserving version of MapReduce based on information flow control and differential privacy; capsules support general kinds of computation, but have to rely on manual re-factoring, whereas Airavat can automatically ensure privacy by restricting itself to specific types of MapReduce computations. We also note that attacks based on leakage across VMs are known @cite_37 and defense mechanisms against such attacks are also being developed @cite_1 . Our capsule framework can avail of such defense mechanisms as they are developed further; we view such work as orthogonal to our central goal.
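The differential-privacy ingredient that Airavat combines with information flow control can be illustrated with the textbook Laplace mechanism; the sketch below is a generic mechanism for a sensitivity-1 counting query, not Airavat's code, and the data and epsilon value are assumptions made for the example.

```python
import numpy as np

def private_count(values, predicate, epsilon, rng=None):
    """Release a differentially private count: true count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise of scale 1/epsilon yields epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 47, 31]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```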
|
{
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_40",
"@cite_49",
"@cite_16",
"@cite_11"
],
"mid": [
"2119028650",
"",
"192814132",
"2115466528",
"1922956467",
"2169757402"
],
"abstract": [
"Third-party cloud computing represents the promise of outsourcing as applied to computation. Services, such as Microsoft's Azure and Amazon's EC2, allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure. However, in this paper, we show that this approach can also introduce new vulnerabilities. Using the Amazon EC2 service as a case study, we show that it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target. We explore how such placement can then be used to mount cross-VM side-channel attacks to extract information from a target VM on the same machine.",
"",
"We present Airavat, a MapReduce-based system which provides strong security and privacy guarantees for distributed computations on sensitive data. Airavat is a novel integration of mandatory access control and differential privacy. Data providers control the security policy for their sensitive data, including a mathematical bound on potential privacy violations. Users without security expertise can perform computations on the data, but Airavat confines these computations, preventing information leakage beyond the data provider's policy. Our prototype implementation demonstrates the flexibility of Airavat on several case studies. The prototype is efficient, with run times on Amazon's cloud computing infrastructure within 32 of a MapReduce system with no security.",
"Digital provenance is meta-data that describes the ancestry or history of a digital object. Most work on provenance focuses on how provenance increases the value of data to consumers. However, provenance is also valuable to storage providers. For example, provenance can provide hints on access patterns, detect anomalous behavior, and provide enhanced user search capabilities. As the next generation storage providers, cloud vendors are in the unique position to capitalize on this opportunity to incorporate provenance as a fundamental storage system primitive. To date, cloud offerings have not yet done so. We provide motivation for providers to treat provenance as first class data in the cloud and based on our experience with provenance in a local storage system, suggest a set of requirements that make provenance feasible and attractive.",
"Cloud computing infrastructures enable companies to cut costs by outsourcing computations on-demand. However, clients of cloud computing services currently have no means of verifying the confidentiality and integrity of their data and computation. To address this problem we propose the design of a trusted cloud computing platform (TCCP). TCCP enables Infrastructure as a Service (IaaS) providers such as Amazon EC2 to provide a closed box execution environment that guarantees confidential execution of guest virtual machines. Moreover, it allows users to attest to the IaaS provider and determine whether or not the service is secure before they launch their virtual machines.",
"For many companies, clouds are becoming an interesting alternative to a dedicated IT infrastructure. However, cloud computing also carries certain risks for both the customer and the cloud provider. The customer places his computation and data on machines he cannot directly control; the provider agrees to run a service whose details he does not know. If something goes wrong - for example, data leaks to a competitor, or the computation returns incorrect results - it can be difficult for customer and provider to determinewhich of themhas caused the problem, and, in the absence of solid evidence, it is nearly impossible for them to hold each other responsible for the problem if a dispute arises. In this paper, we propose that the cloud should be made accountable to both the customer and the provider. Both parties should be able to check whether the cloud is running the service as agreed. If a problem appears, they should be able to determine which of them is responsible, and to prove the presence of the problem to a third party, such as an arbitrator or a judge. We outline the technical requirements for an accountable cloud, and we describe several challenges that are not yet met by current accountability techniques."
]
}
|
1002.0298
|
1903715846
|
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than raw data. Capsules can be deployed in a variety of ways, either on a trusted third party or the user's own computer or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameters. The combination of interface restrictions and policy control lets us bound the impact of an attacker who compromises the service to gain access to the user's capsule or a malicious insider at the service itself.
|
PrivAD @cite_12 and Adnostic @cite_34 are recent proposals for client-side targeted-advertising systems; this is somewhat similar to a client-side targeted ads capsule in our framework. The difference is that our framework generalizes to other deployment scenarios as well. In the future, we hope to borrow their techniques for anonymized ad impression collection in our targeted ads capsule as well; currently, we do this using aggregation, whereas PrivAD and Adnostic offer dealer-based and encryption-based mechanisms, respectively, for this purpose.
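A threshold-aggregation step of the kind our aggregation-based reporting relies on can be sketched as follows; the report format and the threshold are assumptions made for the example, and the dealer-based (PrivAD) and encryption-based (Adnostic) alternatives replace this aggregator with cryptographic machinery.

```python
from collections import Counter

def aggregate_impressions(per_client_reports, min_total=10):
    """Sum per-client ad impression counts and suppress rarely seen ads.

    Each client reports only {ad_id: count}; the aggregator never learns which
    pages a client visited, and ads whose total count stays below `min_total`
    are dropped to limit re-identification from sparse reports.
    """
    totals = Counter()
    for report in per_client_reports:
        totals.update(report)
    return {ad: n for ad, n in totals.items() if n >= min_total}

reports = [{"ad42": 3, "ad7": 1}, {"ad42": 5}, {"ad42": 4, "ad99": 1}]
print(aggregate_impressions(reports, min_total=10))    # only ad42 (total 12) survives
```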
|
{
"cite_N": [
"@cite_34",
"@cite_12"
],
"mid": [
"2189109560",
"1795861175"
],
"abstract": [
"Online behavioral advertising (OBA) refers to the practice of tracking users across web sites in order to infer user interests and preferences. These interests and preferences are then used for selecting ads to present to the user. There is great concern that behavioral advertising in its present form infringes on user privacy. The resulting public debate — which includes consumer advocacy organizations, professional associations, and government agencies — is premised on the notion that OBA and privacy are inherently in conflict.In this paper we propose a practical architecture that enables targeting without compromising user privacy. Behavioral profiling and targeting in our system takes place in the user’s browser. We discuss the effectiveness of the system as well as potential social engineering and web-based attacks on the architecture. One complication is billing; ad-networks must bill the correct advertiser without knowing which ad was displayed to the user. We propose an efficient cryptographic billing system that directly solves the problem. We implemented the core targeting system as a Firefox extension and report on its effectiveness.\u0000",
"Online advertising is a major economic force in the Internet today. Today’s deployments, however, erode privacy and degrade performance as browsers wait for ad networks to deliver ads. In this paper we pose the question: is it possible to build a practical private online advertising system? To this end we present an initial design where ads are served from the endhost. It is attractive from three standpoints — privacy, profit, and performance: tracking the user’s profile on their computer and not at a third-party improves privacy; better targeting and potentially lower operating costs can improve profits; and relying more on the local endhost rather than a distant central third-party can improve performance. In this paper we explore whether such a system is practical with an eye towards scalability, costs, and deployability. Based on a feasibility study conducted over traces from over 31K users and 130K ads, we believe our approach holds much promise."
]
}
|
1002.0298
|
1903715846
|
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than raw data. Capsules can be deployed in a variety of ways, either on a trusted third party or the user's own computer or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameters. The combination of interface restrictions and policy control lets us bound the impact of an attacker who compromises the service to gain access to the user's capsule, or of a malicious insider at the service itself.
|
The capsule framework builds on existing isolation mechanisms, such as the virtual machine security architecture ( Terra @cite_29 ), proposals that use TPMs ( Flicker @cite_32 ), and systems based on a secure co-processor @cite_36 . These proposals offer the foundation upon which our capsule framework can build to provide useful guarantees for data owners. Our implementation also borrows existing mechanisms (XenSocket @cite_19 , vTPM @cite_10 , disaggregation @cite_26 , use of hardware virtualization features @cite_14 ) that help us improve the performance and security of a virtual machine-based architecture. Currently, we do not provide any automatic re-factoring mechanisms; in the future, we hope to explore using existing program partitioning approaches ( Swift @cite_35 , PrivTrans @cite_21 ) for this purpose.
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_14",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_19",
"@cite_10"
],
"mid": [
"2166510103",
"2072633121",
"2168760272",
"185166390",
"2166004296",
"1809664600",
"2167804035",
"1483930278",
"1729172517"
],
"abstract": [
"Swift is a new, principled approach to building web applications that are secure by construction. In modern web applications, some application functionality is usually implemented as client-side code written in JavaScript. Moving code and data to the client can create security vulnerabilities, but currently there are no good methods for deciding when it is secure to do so. Swift automatically partitions application code while providing assurance that the resulting placement is secure and efficient. Application code is written as Java-like code annotated with information flow policies that specify the confidentiality and integrity of web application information. The compiler uses these policies to automatically partition the program into JavaScript code running in the browser, and Java code running on the server. To improve interactive performance, code and data are placed on the client side. However, security-critical code and data are always placed on the server. Code and data can also be replicated across the client and server, to obtain both security and performance. A max-flow algorithm is used to place code and data in a way that minimizes client-server communication.",
"Virtual machine monitors (VMMs) have been hailed as the basis for an increasing number of reliable or trusted computing systems. The Xen VMM is a relatively small piece of software -- a hypervisor -- that runs at a lower level than a conventional operating system in order to provide isolation between virtual machines: its size is offered as an argument for its trustworthiness. However, the management of a Xen-based system requires a privileged, full-blown operating system to be included in the trusted computing base (TCB). In this paper, we introduce our work to disaggregate the management virtual machine in a Xen-based system. We begin by analysing the Xen architecture and explaining why the status quo results in a large TCB. We then describe our implementation, which moves the domain builder, the most important privileged component, into a minimal trusted compartment. We illustrate how this approach may be used to implement \"trusted virtualisation\" and improve the security of virtual TPM implementations. Finally, we evaluate our approach in terms of the reduction in TCB size, and by performing a security analysis of the disaggregated system.",
"Kernel-level attacks or rootkits can compromise the security of an operating system by executing with the privilege of the kernel. Current approaches use virtualization to gain higher privilege over these attacks, and isolate security tools from the untrusted guest VM by moving them out and placing them in a separate trusted VM. Although out-of-VM isolation can help ensure security, the added overhead of world-switches between the guest VMs for each invocation of the monitor makes this approach unsuitable for many applications, especially fine-grained monitoring. In this paper, we present Secure In-VM Monitoring (SIM), a general-purpose framework that enables security monitoring applications to be placed back in the untrusted guest VM for efficiency without sacrificing the security guarantees provided by running them outside of the VM. We utilize contemporary hardware memory protection and hardware virtualization features available in recent processors to create a hypervisor protected address space where a monitor can execute and access data in native speeds and to which execution is transferred in a controlled manner that does not require hypervisor involvement. We have developed a prototype into KVM utilizing Intel VT hardware virtualization technology. We have also developed two representative applications for the Windows OS that monitor system calls and process creations. Our microbenchmarks show at least 10 times performance improvement in invocation of a monitor inside SIM over a monitor residing in another trusted VM. With a systematic security analysis of SIM against a number of possible threats, we show that SIM provides at least the same security guarantees as what can be achieved by out-of-VM monitors.",
"Abstract : How do we build distributed systems that are secure? Cryptographic techniques can be used to secure the communications between physically separated systems, but this is not enough: we must be able to guarantee the privacy of the cryptographic keys and the integrity of the cryptographic functions, in addition to the integrity of the security kernel and access control databases we have on the machines. Physical security is a central assumption upon which secure distributed systems are built; without this foundation even the best cryptosystem or the most secure kernel will crumble. In this thesis, I address the distributed security problem by proposing the addition of a small, physically secure hardware module, a secure coprocessor, to standard workstations and PCs. My central axiom is that secure coprocessors are able to maintain the privacy of the data they process. This thesis attacks the distributed security problem from multiple sides. First, I analyze the security properties of existing system components, both at the hardware and software level. Second, I demonstrate how physical security requirements may be isolated to the secure coprocessor, and showed how security properties may be bootstrapped using cryptographic techniques from this central nucleus of security within a combined hardware software architecture.",
"We present a flexible architecture for trusted computing, called Terra, that allows applications with a wide range of security requirements to run simultaneously on commodity hardware. Applications on Terra enjoy the semantics of running on a separate, dedicated, tamper-resistant hardware platform, while retaining the ability to run side-by-side with normal applications on a general-purpose computing platform. Terra achieves this synthesis by use of a trusted virtual machine monitor (TVMM) that partitions a tamper-resistant hardware platform into multiple, isolated virtual machines (VM), providing the appearance of multiple boxes on a single, general-purpose platform. To each VM, the TVMM provides the semantics of either an \"open box,\" i.e. a general-purpose hardware platform like today's PCs and workstations, or a \"closed box,\" an opaque special-purpose platform that protects the privacy and integrity of its contents like today's game consoles and cellular phones. The software stack in each VM can be tailored from the hardware interface up to meet the security requirements of its application(s). The hardware and TVMM can act as a trusted party to allow closed-box VMs to cryptographically identify the software they run, i.e. what is in the box, to remote parties. We explore the strengths and limitations of this architecture by describing our prototype implementation and several applications that we developed for it.",
"Privilege separation partitions a single program into two parts: a privileged program called the monitor and an unprivileged program called the slave. All trust and privileges are relegated to the monitor, which results in a smaller and more easily secured trust base. Previously the privilege separation procedure, i.e., partitioning one program into the monitor and slave, was done by hand [18, 28]. We design techniques and develop a tool called Privtrans that allows us to automatically integrate privilege separation into source code, provided a few programmer annotations. For instance, our approach can automatically integrate the privilege separation previously done by hand in OpenSSH, while enjoying similar security benefits. Additionally, we propose optimization techniques that augment static analysis with dynamic information. Our optimization techniques reduce the number of expensive calls made by the slave to the monitor. We show Privtrans is effective by integrating privilege separation into several open-source applications.",
"We present Flicker, an infrastructure for executing security-sensitive code in complete isolation while trusting as few as 250 lines of additional code. Flicker can also provide meaningful, fine-grained attestation of the code executed (as well as its inputs and outputs) to a remote party. Flicker guarantees these properties even if the BIOS, OS and DMA-enabled devices are all malicious. Flicker leverages new commodity processors from AMD and Intel and does not require a new OS or VMM. We demonstrate a full implementation of Flicker on an AMD platform and describe our development environment for simplifying the construction of Flicker-enabled code.",
"This paper presents the design and implementation of XenSocket, a UNIX-domain-socket-like construct for high-throughput in-terdomain (VM-to-VM) communication on the same system. The design of XenSocket replaces the Xen page-flipping mechanism with a static circular memory buffer shared between two domains, wherein information is written by one domain and read asynchronously by the other domain. XenSocket draws on best-practice work in this field and avoids incurring the overhead of multiple hypercalls and memory page table updates by aggregating what were previously multiple operations on multiple network packets into one or more large operations on the shared buffer. While the reference implementation (and name) of XenSocket is written against the Xen virtual machine monitor, the principle behind XenSocket applies broadly across the field of virtual machines.",
"We present the design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. To this end, we virtualized the Trusted Platform Module (TPM). As a result, the TPM's secure storage and cryptographic functions are available to operating systems and applications running in virtual machines. Our new facility supports higher-level services for establishing trust in virtualized environments, for example remote attestation of software integrity. We implemented the full TPM specification in software and added functions to create and destroy virtual TPM instances. We integrated our software TPM into a hypervisor environment to make TPM functions available to virtual machines. Our virtual TPM supports suspend and resume operations, as well as migration of a virtual TPM instance with its respective virtual machine across platforms. We present four designs for certificate chains to link the virtual TPM to a hardware TPM, with security vs. efficiency trade-offs based on threat models. Finally, we demonstrate a working system by layering an existing integrity measurement application on top of our virtual TPM facility."
]
}
|
1002.0712
|
1858690290
|
In this paper we present the Chelonia storage cloud middleware. It was designed to fill the requirements gap between those of large, sophisticated scientific collaborations which have adopted the grid paradigm for their distributed storage needs, and of corporate business communities which are gravitating towards the cloud paradigm. The similarities and differences between Chelonia and several well-known grid- and cloud-based storage solutions are discussed. The design of Chelonia has been chosen to optimize high reliability and scalability of an integrated system of heterogeneous, geographically dispersed storage sites and the ability to easily expand the system dynamically. The architecture and implementation in terms of web services running inside the Advanced Resource Connector Hosting Environment Daemon (ARC HED) are described. We present results of tests in both local-area and wide-area networks that demonstrate the fault-tolerance, stability and scalability of Chelonia.
|
Unlike Chelonia, iRODS @cite_4 does not provide any storage itself but is more an interface to other, third-party storage systems. Based on the client-server model, iRODS provides a flexible data grid management system. It allows uniform access to heterogeneous storage resources over a wide area network. Its functionality, with a uniform namespace for several Data Grid Managers and file systems, is quite similar to the functionality offered by our gateway module. However, iRODS uses a database system for maintaining the attributes and states of data and operations. This is not needed with Chelonia's gateway modules.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2183382725"
],
"abstract": [
"Data Grids are used for managing massive amounts of data (Peta scale) that are distributed across heterogeneous storage systems. As such they are complex in nature and deal with multiple operations in the life-cycle of a data set from creation to usage to preservation to final disposition. Administering a data grid can be very challenging (not only for system administrators, but also for data providers and user communities). Data grids are reactive systems that handle events based on contextual information. They also maintain transactional capabilities in order to ensure consistency across distributed storage systems. We are developing a data grid system called integrated Rule Oriented Data Systems (iRODS) manage the phases of the data life-cycle using ECA-type rules. Such a system not only captures the complex operational policies of a data grid but also provides a declarative semantics for describing event processing based on a side effects ontology and context information stored in the data grid. In this paper we describe the event management and processing being implemented in iRODS and how a distributed rule engine is used to handle actions in a data grid. The iRODs data grid can be viewed as a complex, distributed event processing system providing data life-cycle management capabilities using a rule-oriented architecture."
]
}
|
1002.0855
|
2168164497
|
We study a slotted version of the Aloha Medium Access (MAC) protocol in a Mobile Ad-hoc Network (MANET). Our model features transmitters randomly located in the Euclidean plane according to a Poisson point process, and a set of receivers representing the next-hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise Ratio (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has a finite-mean geometric random delay and thus a positive next-hop throughput. However, the spatial (or large population) averaging of these individual finite mean delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above. To the best of our knowledge, this phenomenon, which we propose to call the wireless contention phase transition, has not been discussed in the literature. We comment on the relationships between the above facts and the heavy tails found in the so-called "RESTART" algorithm. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the "RESTART" mechanism, which in turn explains why we have an infinite spatial average. Adaptive coding offers another nice way of breaking the outage RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
|
As already mentioned, the present paper assumes a full time-scale separation between the mobility on one side and the MAC and physical layer on the other side. This assumption makes a major difference between what is done in this paper and what is done in DTNs, where one leverages node mobility to contribute to the transport of packets. There is a large number of publications on the throughput in DTNs, and we will not review this huge literature here. Let us nevertheless stress that there are some interesting connections between the line of thought started in ( @cite_10 ), where it was first shown that mobility increases capacity, and what is done in . We show in this section that mobility helps in a way which is quite different from that considered in @cite_10 : mobility may in certain cases break dependence and hence mitigate the RESTART phenomenon; it may hence decrease the mean local delay of the typical node (or equivalently increase its throughput), even if one does not use mobility to transport packets.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2149959815"
],
"abstract": [
"The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying."
]
}
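The slotted-Aloha local-delay model summarized in the abstract of this record (Poisson field of interferers, Rayleigh fading, SINR threshold, outage-and-retransmit logic) can be illustrated with a small Monte Carlo sketch. The code below is not taken from the paper; every parameter value (interferer density, medium access probability, path-loss exponent, SINR threshold, noise power) is an illustrative assumption, and the simulation window and slot cap are truncations introduced only to keep the example finite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed values, not taken from the paper).
lam = 0.05          # intensity of the Poisson field of potential interferers
p = 0.3             # Aloha medium access probability
r = 1.0             # transmitter-receiver distance of the typical link
beta = 4.0          # path-loss exponent
T = 1.0             # SINR threshold
W = 0.01            # thermal noise power
R_sim = 50.0        # radius of the simulation window around the receiver
max_slots = 10_000  # truncation so the sketch always terminates

def local_delay(rng):
    """Slots until the typical link is first covered at SINR >= T (outage-and-retransmit logic)."""
    # One fixed realization of the interferer locations (distances to the receiver).
    n = rng.poisson(lam * np.pi * R_sim ** 2)
    radii = R_sim * np.sqrt(rng.random(n))
    for slot in range(1, max_slots + 1):
        if rng.random() >= p:                      # the typical node itself transmits only w.p. p
            continue
        active = rng.random(n) < p                 # interferers transmit independently w.p. p
        fading_sig = rng.exponential(1.0)          # Rayleigh fading: exponential received power
        fading_int = rng.exponential(1.0, n)
        signal = fading_sig * r ** (-beta)
        interference = np.sum(active * fading_int * radii ** (-beta))
        if signal / (W + interference) >= T:
            return slot
    return max_slots

delays = [local_delay(rng) for _ in range(2000)]
print("empirical mean local delay:", np.mean(delays))
print("95th percentile:", np.percentile(delays, 95))
```

Pushing the receiver distance or the noise power up moves the empirical delay distribution toward the heavy-tailed regime that the paper associates with the wireless contention phase transition; each per-realization delay remains geometric, while the average over realizations can blow up.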
|
1002.0855
|
2168164497
|
We study a slotted version of the Aloha Medium Access (MAC) protocol in a Mobile Ad-hoc Network (MANET). Our model features transmitters randomly located in the Euclidean plane according to a Poisson point process, and a set of receivers representing the next-hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise Ratio (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has a finite-mean geometric random delay and thus a positive next-hop throughput. However, the spatial (or large population) averaging of these individual finite mean delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases it exhibits an interesting phase transition phenomenon where the spatial average is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above. To the best of our knowledge, this phenomenon, which we propose to call the wireless contention phase transition, has not been discussed in the literature. We comment on the relationships between the above facts and the heavy tails found in the so-called "RESTART" algorithm. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and where one wastes all the other time slots. This results in the "RESTART" mechanism, which in turn explains why we have an infinite spatial average. Adaptive coding offers another nice way of breaking the outage RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
|
Among the recent papers that we are aware of on the time-space analysis of MANETs, we would quote @cite_1 and @cite_3 . The former focuses on node motion alone and assumes that nodes within transmission range can transmit packets instantaneously. The authors then study the speed at which some multicast information propagates on a Poisson MANET where nodes have independent motion (of the random walk or random way-point type). The latter focuses on a first-passage percolation problem; however, the model used in @cite_3 is the so-called protocol model, and its analysis significantly differs from that of our physical (SINR-based) model. In particular, there is no notion of local delay.
|
{
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2949226357",
"2027916448"
],
"abstract": [
"The goal of this paper is to increase our understanding of the fundamental performance limits of mobile and Delay Tolerant Networks (DTNs), where end-to-end multi-hop paths may not exist and communication routes may only be available through time and mobility. We use analytical tools to derive generic theoretical upper bounds for the information propagation speed in large scale mobile and intermittently connected networks. In other words, we upper-bound the optimal performance, in terms of delay, that can be achieved using any routing algorithm. We then show how our analysis can be applied to specific mobility and graph models to obtain specific analytical estimates. In particular, in two-dimensional networks, when nodes move at a maximum speed @math and their density @math is small (the network is sparse and surely disconnected), we prove that the information propagation speed is upper bounded by ( @math in the random way-point model, while it is upper bounded by @math for other mobility models (random walk, Brownian motion). We also present simulations that confirm the validity of the bounds in these scenarios. Finally, we generalize our results to one-dimensional and three-dimensional networks.",
"In a wireless network, the set of transmitting nodes changes frequently because of the MAC scheduler and the traffic load. Analyzing the connectivity of such a network using static graphs would lead to pessimistic performance results. In this paper, we consider an ad hoc network with half-duplex radios that uses multihop routing and slotted ALOHA for the network MAC contention and introduce a random dynamic multi-digraph to model its connectivity. We first provide analytical results about the degree distribution of the graph. Next, defining the path formation time as the minimum time required for a causal path to form between the source and destination on the dynamic graph, we derive the distributional properties of the connection delay using techniques from first-passage percolation and epidemic processes.We show that the delay scales linearly with the distance and provide asymptotic results (with respect to time) for the positions of the nodes which are able to receive information from a transmitter located at the origin. We also provide simulation results to support the theoretical results."
]
}
|
1002.1099
|
1842842242
|
In this work, we discuss multiplayer pervasive games that rely on the use of ad hoc mobile sensor networks. The unique feature of such games is that players interact with each other and their surrounding environment by using movement and presence as a means of performing game-related actions, utilizing sensor devices. We discuss the fundamental issues and challenges related to this type of game and the scenarios associated with it. We also present and evaluate an example of such a game, called "Hot Potato", developed using the Sun SPOT hardware platform. We provide a set of experimental results, both to evaluate our implementation and to identify issues that arise in pervasive games which utilize sensor network nodes; these results show that there is great potential in this type of game.
|
There is a large body of work regarding the pervasive games genre. The aim of the IPerG EU-funded project @cite_3 was the investigation of the pervasive gaming experience and the implementation of a series of "showcase" pervasive games. Several other works present implementations of pervasive games; see, e.g., @cite_11 @cite_5 . Some examples of well-known pervasive games are , , @cite_15 , @cite_0 . In @cite_10 several interesting issues are raised regarding the theory around pervasive games. In @cite_15 the authors evaluated how people perceive and play a pervasive game in normal, everyday settings. In general, most works focus on the design issues raised by specific games; some of these works additionally try to generalize such issues to the design of pervasive games overall ( @cite_7 and @cite_2 ), or provide surveys of existing approaches. In @cite_13 the authors present scenarios that show the intended characteristics of pervasive multiplayer games and propose services for the development and deployment of crossmedia games, i.e., games that are played on multiple platforms with varying features.
|
{
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2053740704",
"2067129044",
"",
"1966682933",
"1542098878",
"",
"2142772578",
"2075986028",
"2408510291"
],
"abstract": [
"Crossmedia games are a genre of pervasive gaming where a single game instance can be played with a variety of heterogeneous devices that support different forms of players' participation and deliver different game experiences. In this article we present the PM2G initiative, a service-oriented architecture aiming to support crossmedia game development and execution. Due to their relevance in this document, content adaptation and interaction adaptation services are discussed in detail. We also present, as a case study, a game called Pervasive Wizards, which is used to validate our architecture. Finally, we present some performance results obtained in our experiments.",
"A new generation of entertainment technology takes computer games to the streets---and ultimately beyond.",
"",
"Human Pacman is an interactive ubiquitous and mobile entertainment system that is built upon position and perspective sensing via Global Positioning System and inertia sensors; and tangible human-computer interfacing with the use of Bluetooth and capacitive sensors. Although these sensing-based subsystems are weaved into the fabric of the game and are therefore translucent to players, they are nevertheless the technical enabling forces behind Human Pacman. The game strives to bring the computer gaming experience to a new level of emotional and sensory gratification by embedding the natural physical world ubiquitously and seamlessly with a fantasy virtual playground. We have progressed from the old days of 2D arcade Pacman on screens, with incremental development, to the popular 3D game console Pacman, and the recent mobile online Pacman. With our novel Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile-gaming that emphasizes on collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human-physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer graphics fantasy-reality provided by using wearable computers that are equipped with GPS and inertia sensors for players' position and perspective tracking. Virtual cookies and actual tangible physical objects with Bluetooth devices and capacitive sensors are incorporated into the game play to provide novel experiences of seamless transitions between real and virtual worlds. In short, we believe Human Pacman is pioneering a new form of gaming that is based on sensing technology and anchored on physicality, mobility, social interaction, and ubiquitous computing.",
"Pervasive games are an emerging new game genre, which includes context information as an integral part of the game. These games differ from traditional games in that they expand spatio-temporal and social aspects of gaming. Mobile devices support this by enabling players to choose when and where a game is played. Designing pervasive games can be a challenging task, since it is not only limited to the virtual game world, but designers must consider information flow from the real world into the game world and vice versa. In this paper, we describe a user study with an experimental pervasive multiplayer mobile game. The objective was to understand how the players perceive pervasiveness in the game and what the crucial factors are in the design. Based on the results, we propose initial design guidelines and compare them to other design guidelines for the pervasive games.",
"",
"In this paper, we've made an initial attempt to explore the three dimensions of pervasive game play in the context of people's everyday life. Using an advanced prototype of SupaFly, a pervasive game developed by the former company It's Alive (now part of Daydream), we've evaluated how people perceive and play the game in normal, everyday settings. Our evaluation focused on how the players judged the designers' attempts to incorporate the three dimensions in the game.",
"In this article we attempt to describe and analyze the formalisms of pervasive games and pervasive gaming (PG). As the title indicates, PG consists of atomic entities that nevertheless merge into a molecular structure that exhibits emergent features during the actual game-play. The article introduces four axes of PG (mobility, distribution, persistence, and transmediality). Further, it describes and analyses three key units of PG (rules, entities, and mechanics) as well as discusses the role of space in PG by differentiating between tangible space, information-embedded space, and accessibility space. The article is generally concerned with classifying the indispensable components of pervasive games and, in addition, it lists the invariant features of pervasive game-play, meaning the epistemology that is tied to this new kind of gaming situated on the borderline between corporeal and immaterial space. Of particular interest are game rules in pervasive gaming, since they seem to touch upon both the underlying, formal structure of the game (i.e., the ontology of PG) and the actual play vis-a-vis physical and or virtual constraints (i.e., the PG epistemology).",
""
]
}
|
1002.1104
|
2953200881
|
As advances in technology allow for the collection, storage, and analysis of vast amounts of data, the task of screening and assessing the significance of discovered patterns is becoming a major challenge in data mining applications. In this work, we address significance in the context of frequent itemset mining. Specifically, we develop a novel methodology to identify a meaningful support threshold s* for a dataset, such that the number of itemsets with support at least s* represents a substantial deviation from what would be expected in a random dataset with the same number of transactions and the same individual item frequencies. These itemsets can then be flagged as statistically significant with a small false discovery rate. We present extensive experimental results to substantiate the effectiveness of our methodology.
|
A statistical approach for identifying significant itemsets is presented in @cite_1 , where the measure of interest for an itemset is defined as the degree of dependence among its constituent items, which is assessed through a @math test. Unfortunately, as reported in @cite_11 @cite_20 , there are technical flaws in the applications of the statistical test in @cite_1 . Nevertheless, this work pioneered the quest for a rigorous framework for addressing the discovery of significant itemsets.
|
{
"cite_N": [
"@cite_1",
"@cite_20",
"@cite_11"
],
"mid": [
"1511277043",
"2083991698",
"2039594635"
],
"abstract": [
"One of the more well-studied problems in data mining is the search for association rules in market basket data. Association rules are intended to identify patterns of the type: “A customer purchasing item A often also purchases item B.” Motivated partly by the goal of generalizing beyond market basket data and partly by the goal of ironing out some problems in the definition of association rules, we develop the notion of dependence rules that identify statistical dependence in both the presence and absence of items in itemsets. We propose measuring significance of dependence via the chi-squared test for independence from classical statistics. This leads to a measure that is upward-closed in the itemset lattice, enabling us to reduce the mining problem to the search for a border between dependent and independent itemsets in the lattice. We develop pruning strategies based on the closure property and thereby devise an efficient algorithm for discovering dependence rules. We demonstrate our algorithm‘s effectiveness by testing it on census data, text data (wherein we seek term dependence), and synthetic data.",
"This paper considers the framework of the so-called \"market basket problem\", in which a database of transactions is mined for the occurrence of unusually frequent item sets. In our case, \"unusually frequent\" involves estimates of the frequency of each item set divided by a baseline frequency computed as if items occurred independently. The focus is on obtaining reliable estimates of this measure of interestingness for all item sets, even item sets with relatively low frequencies. For example, in a medical database of patient histories, unusual item sets including the item \"patient death\" (or other serious adverse event) might hopefully be flagged with as few as 5 or 10 occurrences of the item set, it being unacceptable to require that item sets occur in as many as 0.1 of millions of patient reports before the data mining algorithm detects a signal. Similar considerations apply in fraud detection applications. Thus we abandon the requirement that interesting item sets must contain a relatively large fixed minimal support, and adopt a criterion based on the results of fitting an empirical Bayes model to the item set counts. The model allows us to define a 95 Bayesian lower confidence limit for the \"interestingness\" measure of every item set, whereupon the item sets can be ranked according to their empirical Bayes confidence limits. For item sets of size J > 2, we also distinguish between multi-item associations that can be explained by the observed J(J-1) 2 pairwise associations, and item sets that are significantly more frequent than their pairwise associations would suggest. Such item sets can uncover complex or synergistic mechanisms generating multi-item associations. This methodology has been applied within the U.S. Food and Drug Administration (FDA) to databases of adverse drug reaction reports and within AT&T to customer international calling histories. We also present graphical techniques for exploring and understanding the modeling results.",
"Abstract A common data mining task is the search for associations in large databases. Here we consider the search for “interestingly large” counts in a large frequency table, having millions of cells, most of which have an observed frequency of 0 or 1. We first construct a baseline or null hypothesis expected frequency for each cell, and then suggest and compare screening criteria for ranking the cell deviations of observed from expected count. A criterion based on the results of fitting an empirical Bayes model to the cell counts is recommended. An example compares these criteria for searching the FDA Spontaneous Reporting System database maintained by the Division of Pharmacovigilance and Epidemiology. In the example, each cell count is the number of reports combining one of 1,398 drugs with one of 952 adverse events (total of cell counts = 4.9 million), and the problem is to screen the drug-event combinations for possible further investigation."
]
}
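The chi-squared dependence test mentioned in this record can be illustrated on its simplest instance, a single pair of items. The sketch below is a generic textbook computation (it is not code from @cite_1 and does not reproduce that paper's pruning machinery): it builds the 2x2 contingency table of a pair of items over a toy transaction database, assumed here to be a list of Python sets, and compares the statistic against the 5% critical value for one degree of freedom.

```python
from itertools import combinations

def chi_squared_pair(transactions, a, b):
    """Chi-squared statistic for independence of items a and b over a list of transactions (sets)."""
    n = len(transactions)
    n_a = sum(a in t for t in transactions)
    n_b = sum(b in t for t in transactions)
    n_ab = sum(a in t and b in t for t in transactions)
    # Observed 2x2 table: rows = a present/absent, columns = b present/absent.
    observed = [[n_ab, n_a - n_ab],
                [n_b - n_ab, n - n_a - n_b + n_ab]]
    row_totals = [n_a, n - n_a]
    col_totals = [n_b, n - n_b]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n   # expected count under independence
            if expected > 0:
                chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

CRITICAL_005_DF1 = 3.841  # 5% critical value of the chi-squared distribution with 1 degree of freedom

# Toy transaction database (each transaction is a set of items).
transactions = [{"milk", "bread"}, {"milk", "bread", "beer"}, {"beer"},
                {"milk", "bread"}, {"bread"}, {"milk", "beer"}]
items = sorted({i for t in transactions for i in t})
for a, b in combinations(items, 2):
    stat = chi_squared_pair(transactions, a, b)
    print(f"{a}-{b}: chi2 = {stat:.3f}, dependent at 5%? {stat > CRITICAL_005_DF1}")
```

On real data one would also worry about small expected counts and about testing many pairs at once, which is exactly the multiple-comparison issue addressed by the surrounding records.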
|
1002.1104
|
2953200881
|
As advances in technology allow for the collection, storage, and analysis of vast amounts of data, the task of screening and assessing the significance of discovered patterns is becoming a major challenge in data mining applications. In this work, we address significance in the context of frequent itemset mining. Specifically, we develop a novel methodology to identify a meaningful support threshold s* for a dataset, such that the number of itemsets with support at least s* represents a substantial deviation from what would be expected in a random dataset with the same number of transactions and the same individual item frequencies. These itemsets can then be flagged as statistically significant with a small false discovery rate. We present extensive experimental results to substantiate the effectiveness of our methodology.
|
A common drawback of the aforementioned works is that they assess the significance of each itemset individually, rather than taking into account the characteristics of the dataset from which the itemsets are extracted. As argued before, if the number of itemsets considered by the analysis is large, even in a purely random dataset some of them are likely to be flagged as significant when considered in isolation. A few works attempt to account for the global structure of the dataset in the context of frequent itemset mining. The authors of @cite_10 propose an approach based on Markov chains to generate a random dataset that has the same transaction lengths and the same individual item frequencies as the given real dataset. The work suggests comparing the outcomes of a number of data mining tasks, frequent itemset mining among others, in the real and the randomly generated datasets in order to assess whether the real dataset embodies any significant global structure. However, such an assessment is carried out in a purely qualitative fashion without rigorous statistical grounding.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2163917503"
],
"abstract": [
"The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information."
]
}
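The Markov chain randomization of @cite_10 rests on a local swap that preserves both row margins (transaction lengths) and column margins (item frequencies). The sketch below is an illustrative reimplementation of that swap step on a 0-1 transaction-by-item matrix, not the authors' code; the dataset, the number of swaps, and the attempt cap are placeholders.

```python
import numpy as np

def swap_randomize(matrix, num_swaps, rng):
    """Return a randomized copy of a 0-1 transaction-by-item matrix with the same margins.

    Each accepted swap picks rows r1, r2 and columns c1, c2 with
    matrix[r1, c1] = matrix[r2, c2] = 1 and matrix[r1, c2] = matrix[r2, c1] = 0,
    and flips the 2x2 checkerboard; row and column sums are unchanged.
    """
    m = matrix.copy()
    n_rows, n_cols = m.shape
    accepted, attempts = 0, 0
    while accepted < num_swaps and attempts < 100 * num_swaps:  # attempt cap avoids rare stalls
        attempts += 1
        r1, r2 = rng.integers(0, n_rows, size=2)
        c1, c2 = rng.integers(0, n_cols, size=2)
        if m[r1, c1] == 1 and m[r2, c2] == 1 and m[r1, c2] == 0 and m[r2, c1] == 0:
            m[r1, c1] = m[r2, c2] = 0
            m[r1, c2] = m[r2, c1] = 1
            accepted += 1
    return m

rng = np.random.default_rng(1)
data = (rng.random((20, 8)) < 0.3).astype(int)      # toy 0-1 dataset: 20 transactions, 8 items
randomized = swap_randomize(data, num_swaps=500, rng=rng)
assert (data.sum(axis=0) == randomized.sum(axis=0)).all()   # item frequencies preserved
assert (data.sum(axis=1) == randomized.sum(axis=1)).all()   # transaction lengths preserved
```

Significance is then assessed empirically: the mining task (for example, counting the frequent itemsets at a given support) is re-run on many such randomized copies and the results are compared with those obtained on the real dataset.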
|
1002.1104
|
2953200881
|
As advances in technology allow for the collection, storage, and analysis of vast amounts of data, the task of screening and assessing the significance of discovered patterns is becoming a major challenge in data mining applications. In this work, we address significance in the context of frequent itemset mining. Specifically, we develop a novel methodology to identify a meaningful support threshold s* for a dataset, such that the number of itemsets with support at least s* represents a substantial deviation from what would be expected in a random dataset with the same number of transactions and the same individual item frequencies. These itemsets can then be flagged as statistically significant with a small false discovery rate. We present extensive experimental results to substantiate the effectiveness of our methodology.
|
The problem of spurious discoveries in the mining of significant patterns is studied in @cite_24 . The paper is concerned with the discovery of significant pairs of items, where significance is measured through the @math -value, that is, the probability of occurrence of the observed support in a random dataset. Significant pairs are those whose @math -values are below a certain threshold that can be suitably chosen to bound either the family-wise error rate (FWER) or the false discovery rate (FDR). The authors compare the relative power of the two metrics through experimental results, but do not provide methods to set a meaningful support threshold, which is the most prominent feature of our approach.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1547785593"
],
"abstract": [
"The problem of spurious apparent patterns arising by chance is a fundamental one for pattern detection. Classical approaches, based on adjustments such as the Bonferroni procedure, are arguably not appropriate in a data mining context. Instead, methods based on the false discovery rate - the proportion of flagged patterns which do not represent an underlying reality - may be more relevant. We describe such procedures and illustrate their application on a marketing dataset."
]
}
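The FWER and FDR controls discussed in this record correspond to two standard ways of turning a list of p-values into a rejection threshold: the Bonferroni correction and the Benjamini-Hochberg procedure. The sketch below is a generic implementation of both (not code from @cite_24), applied to placeholder p-values, e.g., one per candidate item pair.

```python
def bonferroni_flags(p_values, alpha=0.05):
    """FWER control: flag p-values below alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def benjamini_hochberg_flags(p_values, alpha=0.05):
    """FDR control: flag the k smallest p-values, where k is the largest rank
    (1-based, over sorted p-values) with p_(k) <= k * alpha / m."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_star = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_star = rank
    flags = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_star:
            flags[i] = True
    return flags

# Placeholder p-values, e.g. one per candidate item pair.
p_values = [0.0002, 0.004, 0.009, 0.02, 0.04, 0.2, 0.5, 0.8]
print("Bonferroni (FWER):", bonferroni_flags(p_values))
print("Benjamini-Hochberg (FDR):", benjamini_hochberg_flags(p_values))
```

As expected, the FDR rule flags more candidates than the FWER rule at the same nominal level.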
|
1001.5076
|
2949804691
|
Inspired by online ad allocation, we study online stochastic packing linear programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing linear programs which model various online resource allocation problems including online variants of routing, ad allocations, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple primal-dual training-based algorithm achieves a (1 - o(1))-approximation guarantee in the random order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 - 1/e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based primal-dual algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
|
Our proof technique is similar to that of @cite_16 for the AW problem; it is based on their observation that the dual variables satisfy the complementary slackness conditions on the first @math fraction of impressions and approximately satisfy these conditions on the entire set. However, one key difference is that in the AW problem, the coefficients of variable @math in the linear program are the same in both the constraint and the objective function. That is, the contribution an impression makes to an advertiser's value is identical to the amount of budget it consumes. In contrast, in the general class of packing problems that we study, these coefficients are unrelated, which complicates the proof.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2158268067"
],
"abstract": [
"We consider the problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit. The goal is to maximize the revenue generated by these keyword sales, bearing in mind that, as some bidders may eventually exceed their budget, not all keywords should be sold to the highest bidder. We assume that the sequence of keywords (or equivalently, of bids) is revealed on-line. Our concern will be the competitive ratio for this problem versus the off-line optimum. We extend the current literature on this problem by considering the setting where the keywords arrive in a random order. In this setting we are able to achieve a competitive ratio of 1-e under some mild, but necessary, assumptions. In contrast, it is already known that when the keywords arrive in an adversarial order, the best competitive ratio is bounded away from 1. Our algorithm is motivated by PAC learning, and proceeds in two parts: a training phase, and an exploitation phase."
]
}
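The training-based dual idea behind this record, learn dual prices on an initial fraction of the randomly ordered input and then use them to decide the remaining requests online, can be sketched as follows. This is an illustrative reconstruction rather than the paper's algorithm: the dual prices here are obtained by a few projected-subgradient steps on the Lagrangian dual of the sampled packing LP, and the sample fraction, step size, iteration count, and instance sizes are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def learn_duals(sample_values, sample_cons, scaled_budgets, steps=500, lr=0.05):
    """Approximate dual prices for the packing LP restricted to the training sample.

    Minimizes D(alpha) = alpha . scaled_budgets + sum_j max(0, v_j - alpha . a_j)
    over alpha >= 0 by projected subgradient descent (illustrative, not tuned).
    """
    alpha = np.zeros(len(scaled_budgets))
    for _ in range(steps):
        accept = sample_values > sample_cons @ alpha            # requests these prices would accept
        grad = scaled_budgets - accept.astype(float) @ sample_cons
        alpha = np.maximum(0.0, alpha - lr * grad)              # projected subgradient step
    return alpha

def online_allocate(values, cons, budgets, eps=0.1):
    """Two-phase training-based allocation in the random-order model (sketch only)."""
    n, m = cons.shape
    n_train = max(1, int(eps * n))
    alpha = learn_duals(values[:n_train], cons[:n_train], eps * budgets)
    remaining = budgets.astype(float)
    picked, total = [], 0.0
    # The training prefix is used only for learning here (a conservative simplification).
    for j in range(n_train, n):
        if values[j] > cons[j] @ alpha and np.all(cons[j] <= remaining):
            remaining = remaining - cons[j]
            picked.append(j)
            total += values[j]
    return picked, total

# Toy random-order instance with placeholder sizes.
n, m = 2000, 5
values = rng.random(n)
cons = rng.random((n, m)) * 0.01       # small per-request consumption relative to the budgets
budgets = np.full(m, 2.0)
picked, total = online_allocate(values, cons, budgets)
print(f"accepted {len(picked)} requests, total value {total:.2f}")
```

A more careful implementation would also serve the training requests and re-estimate the duals as budgets are consumed; the point of the sketch is only the two-phase structure.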
|
1001.5076
|
2949804691
|
Inspired by online ad allocation, we study online stochastic packing linear programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing linear programs which model various online resource allocation problems including online variants of routing, ad allocations, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple primal-dual training-based algorithm achieves a (1 - o(1))-approximation guarantee in the random order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 - 1/e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based primal-dual algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
|
The random-order model has been considered for several problems, often called secretary problems. The elements arriving online are often the ground set of an appropriate matroid, and the goal is to find a maximum-weight independent set in the matroid; such problems include finding a maximum-value set of @math elements @cite_0 , or finding a maximum spanning forest in a graph when edges appear online. Other secretary problems include finding a maximum-weight set of items that fits in a knapsack. (See @cite_9 for a survey of these and other results.) Constant-competitive algorithms are known for these problems; without additional assumptions (such as those of Theorem 1), no algorithm can achieve a competitive ratio better than @math . Specifically for the DA problem, the results of @cite_33 imply that the random-order model permits a @math -competitive algorithm even without using the free disposal property or the conditions of Theorem .
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_33"
],
"mid": [
"2061418963",
"2165436315",
"2952477323"
],
"abstract": [
"In the classical secretary problem, a set S of numbers is presented to an online algorithm in random order. At any time the algorithm may stop and choose the current element, and the goal is to maximize the probability of choosing the largest element in the set. We study a variation in which the algorithm is allowed to choose k elements, and the goal is to maximize their sum. We present an algorithm whose competitive ratio is 1-O(√1 k). To our knowledge, this is the first algorithm whose competitive ratio approaches 1 as k ← ∞. As an application we solve an open problem in the theory of online auction mechanisms.",
"We present generalized secretary problems as a framework for online auctions. Elements, such as potential employees or customers, arrive one by one online. After observing the value derived from an element, but without knowing the values of future elements, the algorithm has to make an irrevocable decision whether to retain the element as part of a solution, or reject it. The way in which the secretary framework differs from traditional online algorithms is that the elements arrive in uniformly random order. Many natural online auction scenarios can be cast as generalized secretary problems, by imposing natural restrictions on the feasible sets. For many such settings, we present surprisingly strong constant factor guarantees on the expected value of solutions obtained by online algorithms. The framework is also easily augmented to take into account time-discounted revenue and incentive compatibility. We give an overview of recent results and future research directions.",
"We examine several online matching problems, with applications to Internet advertising reservation systems. Consider an edge-weighted bipartite graph G, with partite sets L, R. We develop an 8-competitive algorithm for the following secretary problem: Initially given R, and the size of L, the algorithm receives the vertices of L sequentially, in a random order. When a vertex l L is seen, all edges incident to l are revealed, together with their weights. The algorithm must immediately either match l to an available vertex of R, or decide that l will remain unmatched. Dimitrov and Plaxton show a 16-competitive algorithm for the transversal matroid secretary problem, which is the special case with weights on vertices, not edges. (Equivalently, one may assume that for each l L, the weights on all edges incident to l are identical.) We use a similar algorithm, but simplify and improve the analysis to obtain a better competitive ratio for the more general problem. Perhaps of more interest is the fact that our analysis is easily extended to obtain competitive algorithms for similar problems, such as to find disjoint sets of edges in hypergraphs where edges arrive online. We also introduce secretary problems with adversarially chosen groups. Finally, we give a 2e-competitive algorithm for the secretary problem on graphic matroids, where, with edges appearing online, the goal is to find a maximum-weight acyclic subgraph of a given graph."
]
}
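The classical single-choice secretary problem underlying the generalizations cited in this record has a well-known stopping rule: observe roughly the first n/e elements without committing, then accept the first element that beats everything seen so far. The sketch below implements this baseline rule and estimates its success probability empirically; it is only meant to convey the random-order flavor, since the k-element, knapsack, and matroid variants mentioned above require more involved rules.

```python
import math
import random

def classical_secretary(values):
    """Return the index chosen by the observe-n/e-then-pick rule (or the last index if forced)."""
    n = len(values)
    cutoff = max(1, int(n / math.e))
    best_seen = max(values[:cutoff])
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i
    return n - 1   # no later element beat the observation phase's best; stuck with the last one

def success_probability(n=100, trials=20_000, seed=0):
    """Fraction of trials in which the rule picks the overall maximum."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]
        chosen = classical_secretary(values)
        if values[chosen] == max(values):
            wins += 1
    return wins / trials

print("empirical success probability:", success_probability())   # close to 1/e, about 0.368
```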
|
1001.5076
|
2949804691
|
Inspired by online ad allocation, we study online stochastic packing linear programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing linear programs which model various online resource allocation problems including online variants of routing, ad allocations, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple primal-dual training-based algorithm achieves a (1 - o(1))-approximation guarantee in the random order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 - 1/e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based primal-dual algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
|
There have been recent results regarding ad allocation strategies in display advertising in hybrid settings with both contract-based advertisers and spot market advertisers @cite_13 @cite_22 . Our results in this paper may be interpreted as a class of representative bidding strategies that can be used on behalf of contract-based advertisers competing with the spot market bidders @cite_13 . There are many other interesting problems in ad serving systems related to information retrieval and data mining @cite_23 @cite_25 @cite_17 as well as various optimal caching strategies @cite_14 @cite_4 ; our focus in this paper is on online allocation problems.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_23",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"2168467811",
"",
"2951567589",
"",
"2158635658",
"2073448073",
"2139273454"
],
"abstract": [
"Motivated by contextual advertising systems and other web applications involving efficiency-accuracy tradeoffs, we study similarity caching. Here, a cache hit is said to occur if the requested item is similar but not necessarily equal to some cached item. We study two objectives that dictate the efficiency-accuracy tradeoff and provide our caching policies for these objectives. By conducting extensive experiments on real data we show similarity caching can significantly improve the efficiency of contextual advertising systems, with minimal impact on accuracy. Inspired by the above, we propose a simple generative model that embodies two fundamental characteristics of page requests arriving to advertising systems, namely, long-range dependences and similarities. We provide theoretical bounds on the gains of similarity caching in this model and demonstrate these gains empirically by fitting the actual data to the model.",
"",
"Display advertising has traditionally been sold via guaranteed contracts -- a guaranteed contract is a deal between a publisher and an advertiser to allocate a certain number of impressions over a certain period, for a pre-specified price per impression. However, as spot markets for display ads, such as the RightMedia Exchange, have grown in prominence, the selection of advertisements to show on a given page is increasingly being chosen based on price, using an auction. As the number of participants in the exchange grows, the price of an impressions becomes a signal of its value. This correlation between price and value means that a seller implementing the contract through bidding should offer the contract buyer a range of prices, and not just the cheapest impressions necessary to fulfill its demand. Implementing a contract using a range of prices, is akin to creating a mutual fund of advertising impressions, and requires randomized bidding . We characterize what allocations can be implemented with randomized bidding, namely those where the desired share obtained at each price is a non-increasing function of price. In addition, we provide a full characterization of when a set of campaigns are compatible and how to implement them with randomized bidding strategies.",
"",
"Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.",
"Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by \"bid phrases\" representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.",
"Sponsored search systems are tasked with matching queries to relevant advertisements. The current state-of-the-art matching algorithms expand the user's query using a variety of external resources, such as Web search results. While these expansion-based algorithms are highly effective, they are largely inefficient and cannot be applied in real-time. In practice, such algorithms are applied offline to popular queries, with the results of the expensive operations cached for fast access at query time. In this paper, we describe an efficient and effective approach for matching ads against rare queries that were not processed offline. The approach builds an expanded query representation by leveraging offline processing done for related popular queries. Our experimental results show that our approach significantly improves the effectiveness of advertising on rare queries with only a negligible increase in computational cost."
]
}
|
1001.3720
|
2950718800
|
Flash memory is widely used as the secondary storage in lightweight computing devices due to its outstanding advantages over magnetic disks. Flash memory has many access characteristics different from those of magnetic disks, and how to take advantage of them is becoming an important research issue. There are two existing approaches to storing data into flash memory: page-based and log-based. The former has good performance for read operations, but poor performance for write operations. In contrast, the latter has good performance for write operations when updates are light, but poor performance for read operations. In this paper, we propose a new method of storing data, called page-differential logging, for flash-based storage systems that solves the drawbacks of the two methods. The primary characteristics of our method are: (1) writing only the difference (which we define as the page-differential) between the original page in flash memory and the up-to-date page in memory; (2) computing and writing the page-differential only once at the time the page needs to be reflected into flash memory. The former differs from existing page-based methods, which write the whole page including both changed and unchanged parts of data, and from log-based ones, which keep track of the history of all the changes in a page. Our method allows existing disk-based DBMSs to be reused as flash-based DBMSs just by modifying the flash memory driver, i.e., it is DBMS-independent. Experimental results show that the proposed method improves the I/O performance by 1.2 to 6.1 times over existing methods for the TPC-C data of approximately 1 Gbytes.
|
In page-based methods, there are two update schemes @cite_2, in-place update and out-place update, depending on whether or not the logical page is always written into the same physical page. When a logical page needs to be reflected into flash memory, the in-place update overwrites it into the specific physical page that was read @cite_2, but the out-place update writes it into a new physical page @cite_18 @cite_15.
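The out-place scheme is, in essence, a logical-to-physical remapping. Below is a toy Python sketch of that idea, with garbage collection and wear leveling omitted; the class and variable names are made up for illustration and are not taken from any of the cited systems.

```python
class OutPlaceFTL:
    """Toy flash translation layer illustrating out-place updates:
    every write of a logical page goes to a fresh physical page, and the
    previous physical page is only marked invalid (to be erased later by GC)."""

    def __init__(self, num_physical_pages):
        self.mapping = {}                        # logical page -> physical page
        self.free = list(range(num_physical_pages))
        self.invalid = set()                     # stale pages awaiting erase
        self.flash = {}                          # physical page -> data

    def write(self, logical_page, data):
        if logical_page in self.mapping:         # out-place: never overwrite
            self.invalid.add(self.mapping[logical_page])
        phys = self.free.pop(0)
        self.flash[phys] = data
        self.mapping[logical_page] = phys

    def read(self, logical_page):
        return self.flash[self.mapping[logical_page]]

ftl = OutPlaceFTL(num_physical_pages=8)
ftl.write(0, "v1")
ftl.write(0, "v2")                               # lands on a new physical page
print(ftl.read(0), sorted(ftl.invalid))          # -> v2 [0]
```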
|
{
"cite_N": [
"@cite_18",
"@cite_15",
"@cite_2"
],
"mid": [
"1980625627",
"",
"2113689712"
],
"abstract": [
"Flash memory is among the top choices for storage media in ubiquitous computing. With a strong demand of high-capacity storage devices, the usages of flash memory quickly grow beyond their original designs. The very distinct characteristics of flash memory introduce serious challenges to engineers in resolving the quick degradation of system performance and the huge demand of main-memory space for flash-memory management when high-capacity flash memory is considered. Although some brute-force solutions could be taken, such as the enlarging of management granularity for flash memory, we showed that little advantage is received when system performance is considered. This paper proposes a flexible management scheme for large-scale flash-memory storage systems. The objective is to efficiently manage high-capacity flash-memory storage systems based on the behaviors of realistic access patterns. The proposed scheme could significantly reduce the main-memory usages without noticeable performance degradation.",
"",
"Recent advances in flash media have made it an attractive alternative for data storage in a wide spectrum of computing devices, such as embedded sensors, mobile phones, PDA's, laptops, and even servers. However, flash media has many unique characteristics that make existing data management analytics algorithms designed for magnetic disks perform poorly with flash storage. For example, while random (page) reads are as fast as sequential reads, random (page) writes and in-place data updates are orders of magnitude slower than sequential writes. In this paper, we consider an important fundamental problem that would seem to be particularly challenging for flash storage: efficiently maintaining a very large (100 MBs or more) random sample of a data stream (e.g., of sensor readings). First, we show that previous algorithms such as reservoir sampling and geometric file are not readily adapted to flash. Second, we propose B-FILE, an energy-efficient abstraction for flash media to store self-expiring items, and show how a B-FILE can be used to efficiently maintain a large sample in flash. Our solution is simple, has a small (RAM) memory footprint, and is designed to cope with flash constraints in order to reduce latency and energy consumption. Third, we provide techniques to maintain biased samples with a B-FILE and to query the large sample stored in a B-FILE for a subsample of an arbitrary size. Finally, we present an evaluation with flash media that shows our techniques are several orders of magnitude faster and more energy-efficient than (flash-friendly versions of) reservoir sampling and geometric file. A key finding of our study, of potential use to many flash algorithms beyond sampling, is that \"semi-random\" writes (as defined in the paper) on flash cards are over two orders of magnitude faster and more energy-efficient than random writes."
]
}
|
1001.3720
|
2950718800
|
Flash memory is widely used as the secondary storage in lightweight computing devices due to its outstanding advantages over magnetic disks. Flash memory has many access characteristics different from those of magnetic disks, and how to take advantage of them is becoming an important research issue. There are two existing approaches to storing data into flash memory: page-based and log-based. The former has good performance for read operations, but poor performance for write operations. In contrast, the latter has good performance for write operations when updates are light, but poor performance for read operations. In this paper, we propose a new method of storing data, called page-differential logging, for flash-based storage systems that solves the drawbacks of the two methods. The primary characteristics of our method are: (1) writing only the difference (which we define as the page-differential) between the original page in flash memory and the up-to-date page in memory; (2) computing and writing the page-differential only once at the time the page needs to be reflected into flash memory. The former differs from existing page-based methods, which write the whole page including both changed and unchanged parts of data, and from log-based ones, which keep track of the history of all the changes in a page. Our method allows existing disk-based DBMSs to be reused as flash-based DBMSs just by modifying the flash memory driver, i.e., it is DBMS-independent. Experimental results show that the proposed method improves the I/O performance by 1.2 to 6.1 times over existing methods for the TPC-C data of approximately 1 Gbytes.
|
As explained in , the write operation in flash memory cannot change bits in a page to 1. Therefore, when overwriting the logical page @math that was read from the physical page @math in the block @math into the same physical page @math, we do the following four steps: (1) read all the pages in @math except @math; (2) erase @math; (3) write @math into @math; (4) write all the pages read in Step (1) except @math into the corresponding pages in @math. The in-place update scheme suffers from severe performance problems and is rarely used in flash memory @cite_2 because it causes an erase operation and multiple read and write operations whenever we need to reflect a logical page into flash memory.
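The cost of these four steps is easiest to see in code. The sketch below only illustrates the procedure described above, with erase and write reduced to toy functions; it is not driver code, and the function names are invented for this example.

```python
def in_place_update(block, page_index, new_page, erase, write):
    """The four in-place update steps: read the other pages, erase the block,
    write the new page, then restore the pages that were read in step (1)."""
    others = [(i, p) for i, p in enumerate(block) if i != page_index]  # (1)
    erase(block)                                                       # (2)
    write(block, page_index, new_page)                                 # (3)
    for i, p in others:                                                # (4)
        write(block, i, p)

def erase(block):
    """Toy erase: clears every page in the block."""
    for i in range(len(block)):
        block[i] = None

def write(block, i, page):
    """Toy write: a flash page can only be written after an erase."""
    assert block[i] is None, "write requires a prior erase"
    block[i] = page

blk = ["p0", "p1", "p2", "p3"]
in_place_update(blk, 2, "p2-new", erase, write)
print(blk)   # ['p0', 'p1', 'p2-new', 'p3']
```

A single logical-page update therefore triggers one erase plus reads and writes proportional to the block size, which is exactly why the in-place scheme is rarely used.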
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2113689712"
],
"abstract": [
"Recent advances in flash media have made it an attractive alternative for data storage in a wide spectrum of computing devices, such as embedded sensors, mobile phones, PDA's, laptops, and even servers. However, flash media has many unique characteristics that make existing data management analytics algorithms designed for magnetic disks perform poorly with flash storage. For example, while random (page) reads are as fast as sequential reads, random (page) writes and in-place data updates are orders of magnitude slower than sequential writes. In this paper, we consider an important fundamental problem that would seem to be particularly challenging for flash storage: efficiently maintaining a very large (100 MBs or more) random sample of a data stream (e.g., of sensor readings). First, we show that previous algorithms such as reservoir sampling and geometric file are not readily adapted to flash. Second, we propose B-FILE, an energy-efficient abstraction for flash media to store self-expiring items, and show how a B-FILE can be used to efficiently maintain a large sample in flash. Our solution is simple, has a small (RAM) memory footprint, and is designed to cope with flash constraints in order to reduce latency and energy consumption. Third, we provide techniques to maintain biased samples with a B-FILE and to query the large sample stored in a B-FILE for a subsample of an arbitrary size. Finally, we present an evaluation with flash media that shows our techniques are several orders of magnitude faster and more energy-efficient than (flash-friendly versions of) reservoir sampling and geometric file. A key finding of our study, of potential use to many flash algorithms beyond sampling, is that \"semi-random\" writes (as defined in the paper) on flash cards are over two orders of magnitude faster and more energy-efficient than random writes."
]
}
|
1001.2575
|
1588625941
|
Centralized Virtual Private Networks (VPNs), when used in distributed systems, have performance constraints because all traffic must pass through a central server. In recent years, there has been a paradigm shift towards the use of P2P in VPNs to alleviate pressure placed upon the central server by allowing participants to communicate directly with each other, relegating the server to handling session management and supporting NAT traversal using relays when necessary. Another, less common, approach uses unstructured P2P systems to remove all centralization from the VPN. These approaches currently lack the depth in security options provided by other VPN solutions, and their scalability constraints have not been well studied. In this paper, we propose and implement a novel VPN architecture, which uses a structured P2P system for peer discovery, session management, NAT traversal, and autonomic relay selection, and a central server as a partially-automated public key infrastructure (PKI) via a user-friendly web interface. Our model also provides the first design and implementation of a P2P VPN with full tunneling support, whereby all non-P2P based Internet traffic routes through a trusted third party and does so in a way that is more secure than existing full tunnel techniques. To verify our model, we evaluate our reference implementation by comparing it quantitatively to other VPN technologies, focusing on latency, bandwidth, and memory usage. We also discuss some of our experiences with developing, maintaining, and deploying a P2P VPN.
|
VINI @cite_15, a network infrastructure for evaluating new protocols and services, uses OpenVPN along with Click @cite_5 to provide access from a VINI instance to outside hosts, as an ingress mechanism. OpenVPN only supports a single server and gateway per client and does not support distributed load balancing. VINI may benefit from using a VPN that uses a full tunnel model similar to ours, as it lends itself readily to interesting load balancing schemes.
|
{
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2156874421",
"2101296223"
],
"abstract": [
"Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. Complete configurations are built by connecting elements into a graph; packets flow along the graph's edges. Several features make individual elements more powerful and complex configurations easier to write, including pull processing, which models packet flow driven by transmitting interfaces, and flow-based router context, which helps an element locate other interesting elements.We demonstrate several working configurations, including an IP router and an Ethernet bridge. These configurations are modular---the IP router has 16 elements on the forwarding path---and easy to extend by adding additional elements, which we demonstrate with augmented configurations. On commodity PC hardware running Linux, the Click IP router can forward 64-byte packets at 73,000 packets per second, just 10 slower than Linux alone.",
"This paper describes VINI, a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions. VINI allows researchers to deploy and evaluate their ideas with real routing software, traffic loads, and network events. To provide researchers flexibility in designing their experiments, VINI supports simultaneous experiments with arbitrary network topologies on a shared physical infrastructure. This paper tackles the following important design question: What set of concepts and techniques facilitate flexible, realistic, and controlled experimentation (e.g., multiple topologies and the ability to tweak routing algorithms) on a fixed physical infrastructure? We first present VINI's high-level design and the challenges of virtualizing a single network. We then present PL-VINI, an implementation of VINI on PlanetLab, running the \"Internet In a Slice\". Our evaluation of PL-VINI shows that it provides a realistic and controlled environment for evaluating new protocols and services."
]
}
|
1001.2605
|
2949159703
|
Manifold learning is a hot research topic in the field of computer science and has many applications in the real world. A main drawback of manifold learning methods is, however, that there is no explicit mapping from the input data manifold to the output embedding. This prohibits the application of manifold learning methods in many practical problems such as classification and target detection. Previously, in order to provide explicit mappings for manifold learning methods, many methods have been proposed to obtain an approximate explicit representation mapping under the assumption that there exists a linear projection between the high-dimensional data samples and their low-dimensional embedding. However, this linearity assumption may be too restrictive. In this paper, an explicit nonlinear mapping is proposed for manifold learning, based on the assumption that there exists a polynomial mapping between the high-dimensional data samples and their low-dimensional representations. As far as we know, this is the first time that an explicit nonlinear mapping for manifold learning is given. In particular, we apply this to the method of Locally Linear Embedding (LLE) and derive an explicit nonlinear manifold learning algorithm, named Neighborhood Preserving Polynomial Embedding (NPPE). Experimental results on both synthetic and real-world data show that the proposed mapping is much more effective in preserving the local neighborhood information and the nonlinear geometry of the high-dimensional data samples than previous work.
|
As global approaches, Isometric Feature Mapping (ISOMAP) @cite_9 @cite_13 preserves the pairwise geodesic distances among the high-dimensional data samples in their low-dimensional representations. Hessian Eigenmaps (HLLE) @cite_25 extends ISOMAP to more general cases where the set of intrinsic degrees of freedom may be non-convex. In Riemannian Manifold Learning (RML) @cite_29, the coordinates of the data samples in the tangent space are preserved as their low-dimensional representations.
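For a concrete point of reference, the short sketch below (assuming scikit-learn is available) shows ISOMAP recovering a 2-D embedding of a swiss roll by preserving geodesic distances estimated on a k-nearest-neighbor graph; the parameter values are illustrative only.

```python
# Geodesic-distance-preserving embedding of a swiss roll with ISOMAP.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)
embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(embedding.shape)   # (1500, 2)
```

The design choice behind such global methods is to approximate geodesic distances on a neighborhood graph and then apply classical multidimensional scaling to the resulting distance matrix.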
|
{
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_13",
"@cite_25"
],
"mid": [
"2001141328",
"2125003829",
"2156287497",
"2156838815"
],
"abstract": [
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry, that is, how to construct coordinate charts for a given Riemannian manifold? We implement the Riemannian normal coordinate chart, which has been the most widely used in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low- dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.",
"Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.",
"Abstract We describe a method for recovering the underlying parametrization of scattered data (mi) lying on a manifold M embedded in high-dimensional Euclidean space. The method, Hessian-based locally linear embedding, derives from a conceptual framework of local isometry in which the manifold M, viewed as a Riemannian submanifold of the ambient Euclidean space ℝn, is locally isometric to an open, connected subset Θ of Euclidean space ℝd. Because Θ does not have to be convex, this framework is able to handle a significantly wider class of situations than the original ISOMAP algorithm. The theoretical framework revolves around a quadratic form ℋ(f) = ∫M ∥Hf(m)∥dm defined on functions f : M ↦ ℝ. Here Hf denotes the Hessian of f, and ℋ(f) averages the Frobenius norm of the Hessian over M. To define the Hessian, we use orthogonal coordinates on the tangent planes of M. The key observation is that, if M truly is locally isometric to an open, connected subset of ℝd, then ℋ(f) has a (d + 1)-dimensional null space consisting of the constant functions and a d-dimensional space of functions spanned by the original isometric coordinates. Hence, the isometric coordinates can be recovered up to a linear isometry. Our method may be viewed as a modification of locally linear embedding and our theoretical framework as a modification of the Laplacian eigenmaps framework, where we substitute a quadratic form based on the Hessian in place of one based on the Laplacian."
]
}
|
1001.2767
|
2950272118
|
A scheme that publishes aggregate information about sensitive data must resolve the trade-off between utility to information consumers and privacy of the database participants. Differential privacy is a well-established definition of privacy--this is a universal guarantee against all attackers, whatever their side-information or intent. In this paper, we present a universal treatment of utility based on the standard minimax rule from decision theory (in contrast to the utility model in, which is Bayesian). In our model, information consumers are minimax (risk-averse) agents, each possessing some side-information about the query, and each endowed with a loss-function which models their tolerance to inaccuracies. Further, information consumers are rational in the sense that they actively combine information from the mechanism with their side-information in a way that minimizes their loss. Under this assumption of rational behavior, we show that for every fixed count query, a certain geometric mechanism is universally optimal for all minimax information consumers. Additionally, our solution makes it possible to release query results at multiple levels of privacy in a collusion-resistant manner.
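For concreteness, here is a minimal sketch of the geometric mechanism for a single count query of sensitivity 1. This is the generic textbook construction, not code from the paper, and the epsilon value and names are illustrative.

```python
import numpy as np

def geometric_mechanism(true_count, epsilon, rng):
    """epsilon-differentially private release of a sensitivity-1 count query:
    adds two-sided geometric noise with P(noise = k) proportional to exp(-epsilon * |k|).
    The difference of two i.i.d. geometric(1 - exp(-epsilon)) variables has exactly that law."""
    p = 1.0 - np.exp(-epsilon)
    noise = int(rng.geometric(p)) - int(rng.geometric(p))
    return true_count + noise

rng = np.random.default_rng(0)
print([geometric_mechanism(42, 0.5, rng) for _ in range(5)])
```

Consumers with different loss functions and side information can then post-process the same noisy answer, which is the sense in which a single mechanism can be simultaneously optimal for all of them.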
|
A recent thorough survey of the state of the field of differential privacy is given in @cite_14. Dinur and Nissim @cite_23, @cite_19 establish upper bounds on the number of queries that can be answered with reasonable accuracy. Most of the differential privacy literature circumvents these impossibility results by focusing on interactive models where a mechanism supplies answers to only a sub-linear (in @math) number of queries. Count queries (e.g. @cite_23 @cite_15) and more general queries (e.g. @cite_8 @cite_10) have been studied from this perspective.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_10"
],
"mid": [
"",
"2517104773",
"2120806354",
"2110868467",
"44899178",
"2101771965"
],
"abstract": [
"",
"We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.",
"This work is at theintersection of two lines of research. One line, initiated by Dinurand Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11])and explicitly connected to error-correcting codes by Candes and Tao ([4]; see also [5,3]), is in the use of linearprogramming for error correction. Our principal result is the discovery of a sharp threshhold ρ*∠ 0.239, so that if ρ In the context of privacy-preserving datamining our results say thatany privacy mechanism, interactive or non-interactive, providingreasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.",
"We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d 1 ,..,d n , with a query being a subset q ⊆ [n] to be answered by Σ ieq d i . Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases we show that in order to achieve privacy one has to add perturbation of magnitude (Ω√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n).For time-T bounded adversaries we demonstrate a privacypreserving access algorithm whose perturbation magnitude is ≈ √T.",
"In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.",
"We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians."
]
}
|
1001.2767
|
2950272118
|
A scheme that publishes aggregate information about sensitive data must resolve the trade-off between utility to information consumers and privacy of the database participants. Differential privacy is a well-established definition of privacy--this is a universal guarantee against all attackers, whatever their side-information or intent. In this paper, we present a universal treatment of utility based on the standard minimax rule from decision theory (in contrast to the utility model in, which is Bayesian). In our model, information consumers are minimax (risk-averse) agents, each possessing some side-information about the query, and each endowed with a loss-function which models their tolerance to inaccuracies. Further, information consumers are rational in the sense that they actively combine information from the mechanism with their side-information in a way that minimizes their loss. Under this assumption of rational behavior, we show that for every fixed count query, a certain geometric mechanism is universally optimal for all minimax information consumers. Additionally, our solution makes it possible to release query results at multiple levels of privacy in a collusion-resistant manner.
|
@cite_1 focus attention on count queries that lie in a restricted class; they obtain non-interactive mechanisms that provide simultaneously good accuracy (in terms of worst-case error) for all count queries from a class with polynomial VC dimension. @cite_9 give further results for privately learning hypotheses from a given class.
|
{
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2245160765",
"2169570643"
],
"abstract": [
"Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.",
"We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy."
]
}
|
1001.2767
|
2950272118
|
A scheme that publishes aggregate information about sensitive data must resolve the trade-off between utility to information consumers and privacy of the database participants. Differential privacy is a well-established definition of privacy--this is a universal guarantee against all attackers, whatever their side-information or intent. In this paper, we present a universal treatment of utility based on the standard minimax rule from decision theory (in contrast to the utility model in, which is Bayesian). In our model, information consumers are minimax (risk-averse) agents, each possessing some side-information about the query, and each endowed with a loss-function which models their tolerance to inaccuracies. Further, information consumers are rational in the sense that they actively combine information from the mechanism with their side-information in a way that minimizes their loss. Under this assumption of rational behavior, we show that for every fixed count query, a certain geometric mechanism is universally optimal for all minimax information consumers. Additionally, our solution makes it possible to release query results at multiple levels of privacy in a collusion-resistant manner.
|
Our formulation of the multiple privacy levels is similar to @cite_3. However, they use random output perturbations for preserving privacy, and do not give formal guarantees about differential privacy.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2109642764"
],
"abstract": [
"Random perturbation is a popular method of computing anonymized data for privacy preserving data mining. It is simple to apply, ensures strong privacy protection, and permits effective mining of a large variety of data patterns. However, all the existing studies with good privacy guarantees focus on perturbation at a single privacy level. Namely, a fixed degree of privacy protection is imposed on all anonymized data released by the data holder. This drawback seriously limits the applicability of random perturbation in scenarios where the holder has numerous recipients to which different privacy levels apply. Motivated by this, we study the problem of multi-level perturbation, whose objective is to release multiple versions of a dataset anonymized at different privacy levels. The challenge is that various recipients may collude by sharing their data to infer privacy beyond their permitted levels. Our solution overcomes this obstacle, and achieves two crucial properties. First, collusion is useless, meaning that the colluding recipients cannot learn anything more than what the most trustable recipient (among the colluding recipients) already knows alone. Second, the data each recipient receives can be regarded (and hence, analyzed in the same way) as the output of conventional uniform perturbation. Besides its solid theoretical foundation, the proposed technique is both space economical and computationally efficient. It requires O (n+m) expected space, and produces a new anonymized version in O (n + log m) expected time, where n is the cardinality of the original dataset, and m the number of versions released previously. Both bounds are optimal under the realistic assumption that n » m."
]
}
|
1001.3178
|
1527824420
|
Based on a stochastic geometry framework, we establish an analysis of the multi-hop spatial reuse Aloha protocol (MSR-Aloha) in ad hoc networks. We compare MSR-Aloha to a simple routing strategy, where a node selects the next relay of the treated packet to be its nearest receiver with forward progress toward the final destination (NFP). In addition, performance gains achieved by employing adaptive antenna array systems are quantified in this paper. We derive a tight upper bound on the spatial density of progress of MSR-Aloha. Our analytical results demonstrate that the spatial density of progress scales as the square root of the density of users, and the optimal contention density (that maximizes the spatial density of progress) is independent of the density of users. These two facts are consistent with the observations of , established through an analytical lower bound and through simulations.
|
The idea of designing routing protocols around a notion of progress was first introduced in @cite_3, where the authors propose the most forward within radius (MFR) routing. In MFR routing, an emitter selects its next relay, among the nodes within some given range from it, to be the nearest receiver to the destination. @cite_2 propose the more sophisticated selection rule of the MSR-Aloha scheme described above. However, in their analytical framework, they find this selection rule difficult to manipulate, and so they apply some modifications to it. These modifications will be discussed and compared to our framework in the next sections. In @cite_9, the authors propose and analyze longest edge routing (LER), which applies a selection rule similar to that of MSR-Aloha. MSR-Aloha and LER differ in one key aspect: LER does not consider the direction of the intended destination, which is a challenging analytical aspect. It should be noted that none of the cited works considers the use of adaptive antenna array systems, which is one of the new aspects proposed in this work. Finally, practical implementation issues and complete simulation packages for MSR-Aloha routing are considered and detailed in @cite_2 @cite_0.
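As a geometric illustration of progress-based relay selection, the small self-contained sketch below implements an MFR-style rule: among candidate receivers within a given radius, pick the one whose projection onto the transmitter-to-destination direction is largest. It ignores reception success and interference, which the MSR-Aloha analysis does take into account, and the coordinates and radius are made up.

```python
import math

def most_forward_within_radius(tx, dest, candidates, radius):
    """Return the candidate within `radius` of the transmitter that offers the
    largest forward progress toward the destination, or None if none helps."""
    dx, dy = dest[0] - tx[0], dest[1] - tx[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm                   # unit vector toward destination
    best, best_progress = None, 0.0
    for c in candidates:
        if math.hypot(c[0] - tx[0], c[1] - tx[1]) > radius:
            continue                                # outside the transmission range
        progress = (c[0] - tx[0]) * ux + (c[1] - tx[1]) * uy
        if progress > best_progress:
            best, best_progress = c, progress
    return best

tx, dest = (0.0, 0.0), (10.0, 0.0)
candidates = [(1.0, 2.0), (2.5, -0.5), (4.0, 3.5), (-1.0, 0.0)]
print(most_forward_within_radius(tx, dest, candidates, radius=3.0))   # (2.5, -0.5)
```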
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"2136316858",
"2043288730",
"",
"2132987440"
],
"abstract": [
"This paper is meant to be an illustration of the use of stochastic geometry for analyzing the performance of routing in large wireless ad hoc (mobile or mesh) networks. In classical routing strategies used in such networks, packets are transmitted on a pre-defined route that is usually obtained by a shortest-path routing protocol. In this paper we review some recent ideas concerning a new routing technique which is opportunistic in the sense that each packet at each hop on its (specific) route from an origin to a destination takes advantage of the actual pattern of nodes that captured its recent (re)transmission in order to choose the next relay. The paper focuses both on the distributed algorithms allowing such a routing technique to work and on the evaluation of the gain in performance it brings compared to classical mechanisms. On the algorithmic side, we show that it is possible to implement this opportunistic technique in such a way that the current transmitter of a given packet does not need to know its next relay a priori, but the nodes that capture this transmission (if any) perform a self-selection procedure to choose the packet relay node and acknowledge the transmitter. We also show that this routing technique works well with various medium access protocols (such as Aloha, CSMA, TDMA). Finally, we show that the above relay self-selection procedure can be optimized in the sense that it is the node that optimizes some given utility criterion (e.g. minimize the remaining distance to the final destination), which is chosen as the relay. The performance evaluation part is based on stochastic geometry and combines simulation as analytical models. The main result is that such opportunistic schemes very significantly outperform classical routing schemes when properly optimized and provided at least a small number of nodes in the network know their geographical positions exactly.",
"The multihop spatial reuse Aloha (MSR-Aloha) protocol was recently introduced by Baccelli et aL, where each transmitter selects the receiver among its feasible next hops that maximizes the forward progress of the head of line packet towards its final destination. They identify the optimal medium access probability (MAP) that maximizes the spatial density of progress, defined as the product of the spatial intensity of attempted transmissions times the average per-hop progress of each packet towards its destination. We propose a variant called longest edge routing where each transmitter selects its longest feasible edge, and then identifies a packet in its backlog whose next hop is the associated receiver. The main contribution of this work (and of Baccelli et aL) is the use of stochastic geometry to identify the optimal MAP and the corresponding optimal spatial density of progress.",
"",
"An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density."
]
}
|