aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---|
1108.3915
|
1782028905
|
Sharing data from various sources and of diverse kinds, and fusing them together for sophisticated analytics and mash-up applications are emerging trends, and are prerequisites for grand visions such as that of cyber-physical systems enabled smart cities. Cloud infrastructure can enable such data sharing both because it can scale easily to an arbitrary volume of data and computation needs on demand, as well as because of the natural collocation of diverse such data sets within the infrastructure. However, in order to convince data owners that their data are well protected while being shared among cloud users, the cloud platform needs to provide flexible mechanisms for the users to express the constraints (access rules) subject to which the data should be shared, and likewise, enforce them effectively. We study a comprehensive set of practical scenarios where data sharing needs to be enforced by methods such as aggregation, windowed frames, value constraints, etc., and observe that existing basic access control mechanisms do not provide adequate flexibility to enable effective data sharing in a secure and controlled manner. In this paper, we thus propose a framework for the cloud that extends the popular XACML model significantly by integrating flexible access control decisions and data access in a seamless fashion. We have prototyped the framework and deployed it in a commercial cloud environment for experimental runs to test the efficacy of our approach and evaluate the performance of the implemented prototype.
|
Time-series data --- similar to those considered in our paper --- could arrive at the system in continuous streams, for which relational databases such as MySQL and PostgreSQL are not ideal. Aurora @cite_11 is a popular data stream management system that addresses the limitations of relational databases when it comes to stream data. @cite_20 @cite_7 are among the first to propose a model and implementation of access control for data streams based on Aurora. The model supports four access scenarios: column-based, value-based, general window and sliding window. Our framework supports all of these scenarios for on-demand queries over archival databases. The extension of eXACML to continuous queries over stream databases is left for future work.
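The four access scenarios above can be illustrated as filters over a stream of tuples. The sketch below is purely illustrative: the record schema and function names (`column_filter`, `value_filter`, etc.) are hypothetical, not the Aurora model's actual operators.

```python
# Hypothetical stream of sensor readings; schema is an assumption for this sketch.
records = [
    {"t": 1, "room": "A", "temp": 20.5},
    {"t": 2, "room": "B", "temp": 21.0},
    {"t": 3, "room": "A", "temp": 22.5},
    {"t": 4, "room": "B", "temp": 23.0},
    {"t": 5, "room": "A", "temp": 24.5},
]

# Column-based: hide attributes (here 'room') the reader is not allowed to see.
def column_filter(rec, allowed={"t", "temp"}):
    return {k: v for k, v in rec.items() if k in allowed}

# Value-based: only tuples satisfying a predicate are visible.
def value_filter(stream, pred=lambda r: r["temp"] < 23.0):
    return [r for r in stream if pred(r)]

# General window: access limited to a fixed time interval [lo, hi].
def window_filter(stream, lo=2, hi=4):
    return [r for r in stream if lo <= r["t"] <= hi]

# Sliding window: only the most recent k tuples are visible.
def sliding_window(stream, k=2):
    return stream[-k:]
```

Each filter corresponds to one access rule; a policy engine would compose them before results reach the querying user.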
|
{
"cite_N": [
"@cite_7",
"@cite_20",
"@cite_11"
],
"mid": [
"2069543922",
"1580709729",
"2149576945"
],
"abstract": [
"Access control is an important component of any computational system. However, it is only recently that mechanisms to guard against unauthorized access for streaming data have been proposed. In this paper, we study how to enforce the role-based access control model proposed by us in [5]. We design a set of novel secure operators that filter out, from the results of the corresponding (non-secure) operators, tuples and attributes that are not accessible according to the specified access control policies. We further develop an access control mechanism to enforce the access control policies based on these operators. We show that our method is secure according to the specified policies.",
"Many data stream processing systems are increasingly being used to support applications that handle sensitive information, such as credit card numbers and locations of soldiers on a battleground [1,2,3,6]. These data have to be protected from unauthorized accesses. However, existing access control models and mechanisms cannot be adequately applied to data streams. In this paper, we propose a novel access control model for data streams based on the Aurora data model [2]. Our access control model is role-based and has the following components. Objects to be protected are essentially views (or rather queries) over data streams. We also define two types of privileges - Read privilege for operations such as Filter, Map, BSort, and a set of aggregate privileges for operations such as Min, Max, Count, Avg and Sum. The model also allows the specification of temporal constraints either to limit access to data during a given time bound or to constrain aggregate operations over the data within a specified time window. In the paper, we present the access control model and its formal semantics.",
"Abstract.This paper describes the basic processing model and architecture of Aurora, a new system to manage data streams for monitoring applications. Monitoring applications differ substantially from conventional business data processing. The fact that a software system must process and react to continual inputs from many sources (e.g., sensors) rather than from human operators requires one to rethink the fundamental architecture of a DBMS for this application area. In this paper, we present Aurora, a new DBMS currently under construction at Brandeis University, Brown University, and M.I.T. We first provide an overview of the basic Aurora model and architecture and then describe in detail a stream-oriented set of operators."
]
}
|
1108.2664
|
2146804482
|
We present new and improved approximation and FPT algorithms for computing rooted and unrooted maximum agreement forests (MAFs) of pairs of phylogenetic trees. Their sizes correspond to the subtree-prune-and-regraft distances and the tree-bisection-and-reconnection distances of the trees, respectively. We also provide the first bounded search tree FPT algorithm for computing rooted maximum acyclic agreement forests (MAAFs) of pairs of phylogenetic trees, whose sizes are the hybridization numbers of these pairs of trees. These distance metrics are essential tools for understanding reticulate evolution.
|
Numerous heuristic approaches for computing SPR distances have also been proposed. LatTrans by Hallett and Lagergren @cite_19 models lateral gene transfer events by a restricted version of rooted SPR operations, considering two ways in which the trees can differ. It computes the exact distance under this restricted metric in @math time. HorizStory by MacLeod et al. @cite_3 supports multifurcating trees but does not consider SPR operations where the pruned subtree contains more than one leaf. EEEP by Beiko and Hamilton @cite_0 performs a breadth-first SPR search on a rooted start tree but performs unrooted comparisons between the explored trees and an unrooted reference tree. The distance returned is not guaranteed to be exact, due to optimizations and heuristics that limit the scope of the search, although EEEP provides options to compute the exact unrooted SPR distance with no nontrivial bound on the running time. More recently, RiataHGT by Nakhleh et al. @cite_33 computes an approximation of the SPR distance between rooted multifurcating trees in polynomial time.
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_33",
"@cite_3"
],
"mid": [
"1799812931",
"1992584622",
"1776424232",
"1652298321"
],
"abstract": [
"Background Lateral genetic transfer can lead to disagreements among phylogenetic trees comprising sequences from the same set of taxa. Where topological discordance is thought to have arisen through genetic transfer events, tree comparisons can be used to identify the lineages that may have shared genetic information. An 'edit path' of one or more transfer events can be represented with a series of subtree prune and regraft (SPR) operations, but finding the optimal such set of operations is NP-hard for comparisons between rooted trees, and may be so for unrooted trees as well.",
"This paper develops a model for lateral gene transfer events (a.k.a. horizontal gene transfer events) between a set of gene trees T 1 , T 2 , …, T k and a species tree S. To the best of our knowledge, this model possesses a higher degree of biological and mathematical soundness than any other model proposed in the literature. Among other biological considerations, the model respects the partial order of evolution implied by S. Within our model, we identify an activity parameter that measures the number of genes that are allowed to be simultaneously active in the genome of a taxon, and show that finding the most parsimonious scenario that reconciles the disagreeing gene trees with the species tree is doable in polynomial time when the activity level and number of transfers are small, but intractable in general. To the best of our knowledge, all other models proposed in the literature assume implicitly that the activity is one. Finally, using a dataset of bacterial gene sequences from [4], our implementations found 5 optimal scenarios; one of which is the scenario proposed by the authors in [4].",
"Horizontal gene transfer (HGT) plays a major role in microbial genome diversification, and is claimed to be rampant among various groups of genes in bacteria. Further, HGT is a major confounding factor for any attempt to reconstruct bacterial phylogenies. As a result, detecting and reconstructing HGT events in groups of organisms has become a major endeavor in biology. The problem of detecting HGT events based on incongruence between a species tree and a gene tree is computationally very hard (NP-hard). Efficient algorithms exist for solving restricted cases of the problem. We propose RIATA-HGT, the first polynomial-time heuristic to handle all HGT scenarios, without any restrictions. The method accurately infers HGT events based on analyzing incongruence among species and gene trees. Empirical performance of the method on synthetic and biological data is outstanding. Being a heuristic, RIATA-HGT may overestimate the optimal number of HGT events; empirical performance, however, shows that such overestimation is very mild. We have implemented our method and run it on biological and synthetic data. The results we obtained demonstrate very high accuracy of the method. Current version of RIATA-HGT uses the PAUP tool, and we are in the process of implementing a stand-alone version, with a graphical user interface, which will be made public. The tool, in its current implementation, is available from the authors upon request.",
"Background When organismal phylogenies based on sequences of single marker genes are poorly resolved, a logical approach is to add more markers, on the assumption that weak but congruent phylogenetic signal will be reinforced in such multigene trees. Such approaches are valid only when the several markers indeed have identical phylogenies, an issue which many multigene methods (such as the use of concatenated gene sequences or the assembly of supertrees) do not directly address. Indeed, even when the true history is a mixture of vertical descent for some genes and lateral gene transfer (LGT) for others, such methods produce unique topologies."
]
}
|
1108.2664
|
2146804482
|
We present new and improved approximation and FPT algorithms for computing rooted and unrooted maximum agreement forests (MAFs) of pairs of phylogenetic trees. Their sizes correspond to the subtree-prune-and-regraft distances and the tree-bisection-and-reconnection distances of the trees, respectively. We also provide the first bounded search tree FPT algorithm for computing rooted maximum acyclic agreement forests (MAAFs) of pairs of phylogenetic trees, whose sizes are the hybridization numbers of these pairs of trees. These distance metrics are essential tools for understanding reticulate evolution.
|
Two algorithms for computing rooted SPR distances, SPRdist @cite_30 and TreeSAT @cite_28 , express the problem of computing maximum agreement forests as an integer linear program (ILP) and a satisfiability problem (SAT), respectively, and employ efficient ILP and SAT solvers to obtain a solution. SPRdist has been shown to outperform EEEP and LatTrans @cite_30 . Although such algorithms benefit from highly optimized general-purpose ILP and SAT solvers, experiments show that they cannot compete with the rooted SPR algorithm presented in this paper @cite_4 .
|
{
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_4"
],
"mid": [
"2145171033",
"1558959051",
"1582613544"
],
"abstract": [
"Motivation: Subtree prune and regraft (SPR) is one kind of tree rearrangement that has seen applications in solving several computational biology problems. The minimum number of rooted SPR (rSPR) operations needed to transform one rooted binary tree to another is called the rSPR distance between the two trees. Computing the rSPR distance has been actively studied in recent years. Currently, there is a lack of practical software tools for computing the rSPR distance for relatively large trees with large rSPR distance. Results: In this article, we present a simple and practical method that computes the exact rSPR distance with integer linear programming. By applying this new method on several simulated and real biological datasets, we show that our new method outperforms existing software tools in terms of accuracy and efficiency. Our experimental results indicate that our method can compute the exact rSPR distance for many large trees with large rSPR distance. Availability: A software tool, SPRDist, is available for download from the web page: http: www.engr.uconn.edu ywu. Contact: [email protected]",
"We develop techniques to calculate important measures in evolutionary biology by encoding to CNF formulas and using powerful SAT solvers. Comparing evolutionary trees is a necessary step in tree reconstruction algorithms, locating recombination and lateral gene transfer, and in analyzing and visualizing sets of trees. We focus on two popular comparison measures for trees: the hybridization number and the rooted subtree-prune-and-regraft (rSPR) distance. Both have recently been shown to be NP-hard, and efficient algorithms are needed to compute and approximate these measures. We encode these as a Boolean formula such that two trees have hybridization number k (or rSPR distance k) if and only if the corresponding formula is satisfiable. We use state-of-the-art SAT solvers to determine if the formula encoding the measure has a satisfying assignment. Our encoding also provides a rich source of real-world SAT instances, and we include a comparison of several recent solvers (minisat, adaptg2wsat, novelty+p, Walksat, March KS and SATzilla).",
"We improve on earlier FPT algorithms for computing a rooted maximum agreement forest (MAF) or a maximum acyclic agreement forest (MAAF) of a pair of phylogenetic trees. Their sizes give the subtree-prune-and-regraft (SPR) distance and the hybridization number of the trees, respectively. We introduce new branching rules that reduce the running time of the algorithms from O(3kn) and O(3kn logn) to O(2.42kn) and O(2.42kn logn), respectively. In practice, the speed up may be much more than predicted by the worst-case analysis. We confirm this intuition experimentally by computing MAFs for simulated trees and trees inferred from protein sequence data. We show that our algorithm is orders of magnitude faster and can handle much larger trees and SPR distances than the best previous methods, treeSAT and sprdist."
]
}
|
1108.1966
|
1609343823
|
The usefulness of annotated corpora is greatly increased if there is an associated tool that can allow various kinds of operations to be performed in a simple way. Different kinds of annotation frameworks and many query languages for them have been proposed, including some to deal with multiple layers of annotation. We present here an easy-to-learn query language for a particular kind of annotation framework based on 'threaded trees', which are somewhere between the complete order of a tree and the anarchy of a graph. Through 'typed' threads, they can allow multiple levels of annotation in the same document. Our language has a simple, intuitive and concise syntax and high expressive power. It not only allows searching for complicated patterns with short queries, but also supports data manipulation and the specification of arbitrary return values. Many of the commonly used tasks that otherwise require writing programs can be performed with one or more queries. We compare the language with some others and try to evaluate it.
|
An earlier study compared some of the query languages available (at that time) for graph-based annotation frameworks. These included Emu and the MATE query language. Its authors then proposed their own query language for annotation graphs. This language used path patterns and abbreviatory devices to provide a convenient way to express a wide range of queries. It also exploited the quasi-linearity of annotation graphs by partitioning the precedence relation to allow efficient temporal indexing of the graphs. Another such survey was by Lai and Bird , where the authors considered TigerSearch, CorpusSearch, NiteQL, Tgrep2, Emu and LPath @cite_1 @cite_5 . From this study, the authors tried to derive the requirements that a good tree query language should satisfy.
|
{
"cite_N": [
"@cite_5",
"@cite_1"
],
"mid": [
"2147368736",
"1547865268"
],
"abstract": [
"Linguistic research and natural language processing employ large repositories of ordered trees. XML, a standard ordered tree model, and XPath, its associated language, are natural choices for linguistic data and queries. However, several important expressive features required for linguistic queries are missing or hard to express in XPath. In this paper, we motivate and illustrate these features with a variety of linguistic queries. Then we propose extensions to XPath to support linguistic queries, and design an efficient query engine based on a novel labeling scheme. Experiments demonstrate that our language is not only sufficiently expressive for linguistic trees but also efficient for practical usage.",
"Linguistic research and language technology development employ large repositories of ordered trees. XML, a standard ordered tree model, and XPath, its associated language, are natural choices for linguistic data storage and queries. However, several important expressive features required for linguistic queries are missing in XPath. In this paper, we motivate and illustrate these features with a variety of linguistic queries. Then we define extensions to XPath which support linguistic tree queries, and describe an efficient query engine based on a novel labeling scheme. Experiments demonstrate that our language is not only sufficiently expressive for linguistic trees but also efficient for practical usage."
]
}
|
1108.1636
|
2952835466
|
Manifold learning is a hot research topic in the field of computer science. A crucial issue with current manifold learning methods is that they lack a natural quantitative measure to assess the quality of learned embeddings, which greatly limits their applications to real-world problems. In this paper, a new embedding quality assessment method for manifold learning, named Normalization Independent Embedding Quality Assessment (NIEQA), is proposed. Compared with current assessment methods, which are limited to isometric embeddings, the NIEQA method has a much larger application range due to two features. First, it is based on a new measure which can effectively evaluate how well local neighborhood geometry is preserved under normalization, hence it can be applied to both isometric and normalized embeddings. Second, it can provide both local and global evaluations to output an overall assessment. Therefore, NIEQA can serve as a natural tool in model selection and evaluation tasks for manifold learning. Experimental results on benchmark data sets validate the effectiveness of the proposed method.
|
An @math close to zero suggests a faithful embedding. Reported experimental results show that the PM method provides a good estimation of embedding quality for isometric methods such as ISOMAP. However, as pointed out by the authors, PM is not suitable for normalized embeddings, since the geometric structure of every local neighborhood is distorted by normalization. Although a modified version of PM is proposed in @cite_6 , which eliminates global scaling of each neighborhood, it still cannot address the issue of separate scaling of coordinates in the low-dimensional embedding.
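The alignment step underlying the Procrustes measure can be sketched in a few lines: each neighborhood of the embedding is optimally rotated and translated onto its original counterpart, and the leftover squared error is the local score. This is a minimal illustration, not the cited paper's exact formulation; the function name and test data are assumptions.

```python
import numpy as np

def procrustes_error(X, Y):
    """Residual after optimally rotating and translating Y onto X.

    X, Y: (n, d) arrays holding corresponding points (e.g. one local
    neighborhood in the original space and in the embedding).
    Rigid motions of Y give zero error; any other distortion (including
    the separate per-coordinate scaling discussed above) gives a
    positive error.
    """
    Xc = X - X.mean(axis=0)          # remove translation
    Yc = Y - Y.mean(axis=0)
    # Orthogonal Procrustes: R = U V^T with Yc^T Xc = U S V^T
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    return float(np.linalg.norm(Xc - Yc @ R) ** 2)
```

Averaging this score over all local neighborhoods (optionally with a fitted global scale, as in the modified PM) yields a single quality number.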
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2090679187"
],
"abstract": [
"We present the Procrustes measure, a novel measure based on Procrustes rotation that enables quantitative comparison of the output of manifold-based embedding algorithms such as LLE (Roweis and Saul, Science 290(5500), 2323---2326, 2000) and Isomap (Tenenbaum, de Silva, and Langford, Science 290(5500), 2319---2323, 2000). The measure also serves as a natural tool when choosing dimension-reduction parameters. We also present two novel dimension-reduction techniques that attempt to minimize the suggested measure, and compare the results of these techniques to the results of existing algorithms. Finally, we suggest a simple iterative method that can be used to improve the output of existing algorithms."
]
}
|
1108.1636
|
2952835466
|
Manifold learning is a hot research topic in the field of computer science. A crucial issue with current manifold learning methods is that they lack a natural quantitative measure to assess the quality of learned embeddings, which greatly limits their applications to real-world problems. In this paper, a new embedding quality assessment method for manifold learning, named Normalization Independent Embedding Quality Assessment (NIEQA), is proposed. Compared with current assessment methods, which are limited to isometric embeddings, the NIEQA method has a much larger application range due to two features. First, it is based on a new measure which can effectively evaluate how well local neighborhood geometry is preserved under normalization, hence it can be applied to both isometric and normalized embeddings. Second, it can provide both local and global evaluations to output an overall assessment. Therefore, NIEQA can serve as a natural tool in model selection and evaluation tasks for manifold learning. Experimental results on benchmark data sets validate the effectiveness of the proposed method.
|
Venna and Kaski @cite_42 proposed an assessment method which consists of two measures, one for trustworthiness and one for continuity, based on the change of indices of neighbor samples in @math and @math according to pairwise Euclidean distances, respectively. Aguirre et al. proposed an alternative approach for quantifying the embedding quality, by evaluating the possible overlaps in the low-dimensional embedding. Their assessment is used for the automatic choice of the number of nearest neighbors for LLE @cite_39 and is also exploited in @cite_7 to evaluate the embedding quality of LLE with an optimal regularization parameter. Akkucuk and Carroll @cite_8 independently developed the Agreement Rate (AR) metric, which shares the same form as @math . Based on AR, they suggested another useful assessment method called the corrected agreement rate, obtained by randomly reorganizing the indices of data in @math . Also building on AR, France and Carroll @cite_22 proposed a method using the RAND index to evaluate dimensionality reduction methods.
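The trustworthiness measure of Venna and Kaski can be computed directly from neighbor ranks: points that enter a sample's k-neighborhood in the embedding without being k-neighbors in the original space are penalized by how far down the original ranking they sit. The following is a minimal sketch of that standard formula; variable names are illustrative.

```python
import numpy as np

def trustworthiness(X, Y, k):
    """Venna-Kaski trustworthiness T(k) in [0, 1]; 1 means no point
    intrudes into a k-neighborhood of the embedding Y that it did not
    already belong to in the original data X. Assumes k < n/2."""
    n = len(X)
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    # rank_x[i, j] = rank of j by distance from i in the original space
    # (0 is the point itself, 1 is the nearest neighbor, ...)
    rank_x = np.argsort(np.argsort(dx, axis=1), axis=1)
    penalty = 0.0
    for i in range(n):
        nn_x = set(np.argsort(dx[i])[1:k + 1])   # true k-neighbors
        nn_y = np.argsort(dy[i])[1:k + 1]        # embedding k-neighbors
        for j in nn_y:
            if j not in nn_x:                    # intruder: rank > k
                penalty += rank_x[i, j] - k
    return 1.0 - 2.0 / (n * k * (2 * n - 3 * k - 1)) * penalty
```

The companion continuity measure is symmetric: it swaps the roles of X and Y, penalizing true neighbors that are pushed out of the embedded neighborhood.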
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_42",
"@cite_39"
],
"mid": [
"1688729699",
"1979182421",
"2040608731",
"2043508111",
""
],
"abstract": [
"We develop a metric @math , based upon the RAND index, for the comparison and evaluation of dimensionality reduction techniques. This metric is designed to test the preservation of neighborhood structure in derived lower dimensional configurations. We use a customer information data set to show how @math can be used to compare dimensionality reduction methods, tune method parameters, and choose solutions when methods have a local optimum problem. We show that @math is highly negatively correlated with an alienation coefficient K that is designed to test the recovery of relative distances. In general a method with a good value of @math also has a good value of K. However the monotonic regression used by Nonmetric MDS produces solutions with good values of @math , but poor values of K.",
"Locally linear embedding (LLE) is a recent unsupervised learning algorithm for non-linear dimensionality reduction of high dimensional data. One advantage of this algorithm is that only two parameters need to be set by the user: the number of nearest neighbors and a regularization parameter. The choice of the regularization parameter plays an important role in the embedding results. In this paper, an automated method for choosing this parameter is proposed. Besides, in order to objectively qualify the performance of the embedding results, a new measure of embedding quality is suggested. Our approach is experimentally verified on 9 artificial data sets and 2 real world data sets. Numerical results are compared against two methods previously found in the state of the art.",
"Dimensionality reduction techniques are used for representing higher dimensional data by a more parsimonious and meaningful lower dimensional structure. In this paper we will study two such approaches, namely Carroll’s Parametric Mapping (abbreviated PARAMAP) (Shepard and Carroll, 1966) and Tenenbaum’s Isometric Mapping (abbreviated Isomap) (Tenenbaum, de Silva, and Langford, 2000). The former relies on iterative minimization of a cost function while the latter applies classical MDS after a preprocessing step involving the use of a shortest path algorithm to define approximate geodesic distances. We will develop a measure of congruence based on preservation of local structure between the input data and the mapped low dimensional embedding, and compare the different approaches on various sets of data, including points located on the surface of a sphere, some data called the \"Swiss Roll data\", and truncated spheres.",
"In a visualization task, every nonlinear projection method needs to make a compromise between trustworthiness and continuity. In a trustworthy projection the visualized proximities hold in the original data as well, whereas a continuous projection visualizes all proximities of the original data. We show experimentally that one of the multidimensional scaling methods, curvilinear components analysis, is good at maximizing trustworthiness. We then extend it to focus on local proximities both in the input and output space, and to explicitly make a user-tunable parameterized compromise between trustworthiness and continuity. The new method compares favorably to alternative nonlinear projection methods.",
""
]
}
|
1108.1636
|
2952835466
|
Manifold learning is a hot research topic in the field of computer science. A crucial issue with current manifold learning methods is that they lack a natural quantitative measure to assess the quality of learned embeddings, which greatly limits their applications to real-world problems. In this paper, a new embedding quality assessment method for manifold learning, named as Normalization Independent Embedding Quality Assessment (NIEQA), is proposed. Compared with current assessment methods which are limited to isometric embeddings, the NIEQA method has a much larger application range due to two features. First, it is based on a new measure which can effectively evaluate how well local neighborhood geometry is preserved under normalization, hence it can be applied to both isometric and normalized embeddings. Second, it can provide both local and global evaluations to output an overall assessment. Therefore, NIEQA can serve as a natural tool in model selection and evaluation tasks for manifold learning. Experimental results on benchmark data sets validate the effectiveness of the proposed method.
|
Tenenbaum et al. @cite_12 suggested using the residual variance as a diagnostic measure to evaluate the embedding quality. Given @math and @math , the residual variance is computed as one minus the square of @math , where @math is the standard linear correlation coefficient taken over all entries of @math and @math . Here @math is the approximated geodesic distance between @math and @math @cite_12 and @math . A value of @math close to zero indicates a good quality of the embedding.
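Since the diagnostic is just one minus the squared linear correlation between the two distance matrices, it fits in a few lines. The sketch below takes precomputed distance matrices as inputs; the function name is an assumption, not Isomap's API.

```python
import numpy as np

def residual_variance(DG, DY):
    """Isomap-style residual variance: 1 - r^2, where r is the linear
    correlation between the (approximated geodesic) distances DG and
    the embedding distances DY, taken over all matrix entries."""
    r = np.corrcoef(DG.ravel(), DY.ravel())[0, 1]
    return 1.0 - r ** 2
```

A perfectly linear relationship between the two sets of distances yields a residual variance of zero; uncorrelated distances push it toward one.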
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2001141328"
],
"abstract": [
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure."
]
}
|
1108.2080
|
2157355803
|
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients
|
@cite_27 have pioneered the field of network coding. They showed the value of coding at routers and provided theoretical bounds on the capacity of such networks. Works such as those of @cite_13 , @cite_22 , and @cite_19 show that, for multicast traffic, linear codes achieve maximum throughput, while coding and decoding can be done in polynomial time. @cite_23 show that random network coding can also achieve maximum network capacity. Network coding has been shown to improve throughput in a variety of networks: wireless @cite_14 , peer-to-peer content distribution @cite_2 , energy-efficient broadcast @cite_10 , distributed storage @cite_29 , and others.
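The core idea behind random linear network coding can be sketched concretely: intermediate nodes forward random linear combinations of the source packets over a finite field, and any sink that collects enough independent combinations recovers the originals by Gaussian elimination. This is a generic illustration under the assumption that packet symbols live in the prime field GF(257), not a description of any cited system.

```python
import random

P = 257  # prime field size (assumption for this sketch)

def random_combine(packets, rng):
    """A coded packet: random coefficients plus the matching linear
    combination of the source packets, all arithmetic mod P."""
    coeffs = [rng.randrange(P) for _ in packets]
    mixed = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
             for i in range(len(packets[0]))]
    return coeffs, mixed

def decode(coded, k):
    """Recover the k source packets by Gaussian elimination mod P on the
    received (coefficients, payload) pairs; extra packets add redundancy."""
    rows = [list(c) + list(m) for c, m in coded]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)       # Fermat inverse mod P
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]
```

The Byzantine threat described in the abstract is visible here: decoding trusts the advertised coefficients, so a malicious node injecting a junk combination pollutes everything downstream.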
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_10",
"@cite_29",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_13"
],
"mid": [
"100491291",
"2106403318",
"2103955286",
"2070769254",
"2123562143",
"2105831729",
"2130350209",
"2107520978",
"2138928022"
],
"abstract": [
"The marriage of network coding and wireless packet networks is a natural and attractive one. So, despite network coding being a still nascent field, a considerable body of work on this subject already exists. In this paper, we give a brief overview of this work and hope, thereby, to provide the reader with a firm theoretical basis from which practical implementations and theoretical extensions can be developed.",
"Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
"The wireless networking environment presents formidable challenges to the study of broadcasting and multicasting problems. After addressing the characteristics of wireless networks that distinguish them from wired networks, we introduce and evaluate algorithms for tree construction in infrastructureless, all-wireless applications. The performance metric used to evaluate broadcast and multicast trees is energy-efficiency. We develop the broadcast incremental power algorithm, and adapt it to multicast operation as well. This algorithm exploits the broadcast nature of the wireless communication environment, and addresses the need for energy-efficient operation. We demonstrate that our algorithm provides better performance than algorithms that have been developed for the link-based, wired environment.",
"Network coding provides elegant solutions to many data transmission problems. The usage of coding for distributed data storage has also been explored. In this work, we study a joint storage and transmission problem, where a source transmits a file to storage nodes whenever the file is updated, and clients read the file by retrieving data from the storage nodes. The cost includes the transmission cost for file update and file read, as well as the storage cost. We show that such a problem can be transformed into a pure flow problem and is solvable in polynomial time using linear programming. Coding is often necessary for obtaining the optimal solution with the minimum cost. However, we prove that for networks of generalized tree structures, where adjacent nodes can have asymmetric links between them, file splitting, instead of coding, is sufficient for achieving optimality. In particular, if there is no constraint on the numbers of bits that can be stored in storage nodes, there exists an optimal solution that always transmits and stores the file as a whole. The proof is accompanied by an algorithm that optimally assigns file segments to storage nodes.",
"The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures.",
"We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.",
"A novel randomized network coding approach for robust, distributed transmission and compression of information in networks is presented, and its advantages over routing-based approaches are demonstrated.",
"We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks of information. The randomization introduced by the coding process eases the scheduling of block propagation, and, thus, makes the distribution more efficient. This is particularly important in large unstructured overlay networks, where the nodes need to make block forwarding decisions based on local information only. We compare network coding to other schemes that transmit unencoded information (i.e. blocks of the original file) and, also, to schemes in which only the source is allowed to generate and transmit encoded packets. We study the performance of network coding in heterogeneous networks with dynamic node arrival and departure patterns, clustered topologies, and when incentive mechanisms to discourage free-riding are in place. We demonstrate through simulations of scenarios of practical interest that the expected file download time improves by more than 20-30% with network coding compared to coding at the server only, and by more than 2-3 times compared to sending unencoded information. Moreover, we show that network coding improves the robustness of the system and is able to smoothly handle extreme situations where the server and nodes leave the system.",
"We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102) that examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays."
]
}
|
1108.2080
|
2157355803
|
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients
|
A significant amount of research aims to prevent or recover from pollution attacks. @cite_3 attempt to detect at the sinks whether packets have been modified by a Byzantine node. They do so by adding hash symbols that are obtained as a polynomial function of the data symbols; pollution is indicated by an inconsistency between the packets and the hashes.
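The sink-side consistency check can be sketched with a toy example. This is an illustrative simplification, not the scheme of @cite_3: it uses a linear hash with a public evaluation point over a small prime field, so it is not secure against an adversary, but it shows why honest linear combinations stay consistent while injected junk fails the check.

```python
# Toy sketch of sink-side pollution detection for random linear network
# coding. Illustrative only: the field, the hash, and the public point r
# are assumptions; the cited scheme uses hashes the adversary cannot forge.
P = 2**13 - 1  # small prime modulus for the field GF(P)

def poly_hash(data, r=3):
    """Evaluate the packet, viewed as a polynomial over GF(P), at point r."""
    h = 0
    for sym in reversed(data):
        h = (h * r + sym) % P
    return h

def make_packet(data):
    # the source appends a hash symbol to each exogenous packet
    return data + [poly_hash(data)]

def combine(packets, coeffs):
    # a relay (honest or Byzantine) mixes packets linearly over GF(P)
    return [sum(c * p[i] for c, p in zip(coeffs, packets)) % P
            for i in range(len(packets[0]))]

def consistent(packet):
    # sink-side check: do the data symbols still match the hash symbol?
    *data, h = packet
    return poly_hash(data) == h
```

Because this toy hash is linear, any honest linear combination of valid packets remains consistent, while a packet polluted en route fails the check with high probability over the choice of r.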
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2110510622"
],
"abstract": [
"An information-theoretic approach for detecting Byzantine or adversarial modifications in networks employing random linear network coding is described. Each exogenous source packet is augmented with a flexible number of hash symbols that are obtained as a polynomial function of the data symbols. This approach depends only on the adversary not knowing the random coding coefficients of all other packets received by the sink nodes when designing its adversarial packets. We show how the detection probability varies with the overhead (ratio of hash to data symbols), coding field size, and the amount of information unknown to the adversary about the random code."
]
}
|
1108.2080
|
2157355803
|
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients
|
@cite_26, for example, discuss rate-optimal protocols that survive Byzantine attacks. Their idea is to append extra parity information to the source messages. @cite_31 provide non-linear protocols for achieving capacity in the presence of Byzantine adversaries.
|
{
"cite_N": [
"@cite_31",
"@cite_26"
],
"mid": [
"2119407552",
"2106831077"
],
"abstract": [
"We consider the problem of achieving capacity through network coding when some of the nodes act covertly as Byzantine adversaries. For several case-study networks, we investigate rates of reliable communication through network coding and upper bounds on capacity. We show that linear codes are inadequate in general, and a slight augmentation of the class of linear codes can increase throughput. Furthermore, we show that even this nonlinear augmentation may not be enough to achieve capacity. We introduce a new class of codes known as bounded-linear that make use of distributions defined over bounded sets of integers subject to linear constraints using real arithmetic.",
"Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces distributed polynomial-time rate-optimal network codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam links, our first algorithm achieves a rate of , where is the network capacity. In contrast, when the adversary has limited eavesdropping capabilities, we provide algorithms that achieve the higher rate of . Our algorithms attain the optimal rate given the strength of the adversary. They are information-theoretically secure. They operate in a distributed manner, assume no knowledge of the topology, and can be designed and implemented in polynomial time. Furthermore, only the source and destination need to be modified; nonmalicious nodes inside the network are oblivious to the presence of adversaries and implement a classical distributed network code. Finally, our algorithms work over wired and wireless networks."
]
}
|
1108.2080
|
2157355803
|
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients
|
Wan et al. @cite_28 propose limiting pollution attacks by identifying the malicious nodes so that they can be isolated; Le and Markopoulou @cite_1 instead identify the precise location of Byzantine attackers using a homomorphic MAC scheme.
|
{
"cite_N": [
"@cite_28",
"@cite_1"
],
"mid": [
"2175927535",
"2032976609"
],
"abstract": [
"Researchers show that network coding can greatly improve the quality of service in P2P live streaming systems (e.g., IPTV). However, network coding is vulnerable to pollution attacks where malicious nodes inject into the network bogus data blocks that will be combined with other legitimate blocks at downstream nodes, leading to incapability of decoding the original blocks and degradation of network performance. In this paper, we propose a novel approach to limiting pollution attacks by identifying malicious nodes. In our scheme, the malicious nodes can be rapidly identified and isolated, so that the system can quickly recover from pollution attacks. Our scheme can fully satisfy the requirements of live streaming systems, and achieves much higher efficiency than previous schemes. Each node in our scheme only needs to perform several hash computations for an incoming block, incurring very small computational latency in the range of several microseconds. The space overhead added to each block is only 20 bytes. The verification information given to each node is independent of the streaming content and thus does not need to be redistributed. The simulation results based on real PPLive channel overlays show that the process of identifying malicious nodes only takes a few seconds even in the presence of a large number of malicious nodes.",
"Intra-session network coding is known to be vulnerable to Byzantine attacks: malicious nodes can inject bogus packets, which get combined with legitimate blocks at downstream nodes, thus preventing decoding of original packets and degrading the overall performance. In this paper, we provide a novel approach that can identify the precise location of all Byzantine attackers in systems with intra-session network coding. A key ingredient of our approach is a novel homomorphic MAC scheme for expanding subspaces (SpaceMac) that allows to eliminate any uncertainty in identifying attackers via subspace properties. To the best of our knowledge, our scheme is the first that can identify precisely all Byzantine attackers, and at the same time has both low computation (sub- millisecond) and communication overhead (20 bytes per data block). Simulation results show that, even when there are multiple colluding attackers in a network, all of them can be successfully identified in a very short time."
]
}
|
1108.2080
|
2157355803
|
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients
|
@cite_6 provide a signature scheme for content distribution with network coding, based on linear algebra and cryptography. The source provides all nodes with an invariant vector and public-key information, with which any node can check the validity of a packet on the fly. @cite_8 provide homomorphic signature schemes for preventing such Byzantine attacks, but their scheme was later shown to contain a flaw. @cite_21 and @cite_25 also provide homomorphic signature schemes, with a construction based on elliptic curves. This scheme augments the packet size by only one constant of about @math bits.
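The homomorphic property these signature schemes rely on can be illustrated with an unkeyed toy tag. No security is intended here (the real constructions use secret keys and elliptic-curve groups); the modulus and per-symbol generators below are made-up parameters. The point is only to show why intermediate nodes can derive a valid tag for a coded packet without contacting the source.

```python
# Toy multiplicative-homomorphic tag: tag(a*u + b*v) == tag(u)^a * tag(v)^b
# (mod q). Unkeyed and insecure; q and g are illustrative assumptions.
q = 2**31 - 1           # prime modulus
g = [5, 7, 11]          # public per-symbol generators (assumed setup)

def tag(vec):
    """Tag a packet of symbols as the product of g_i^{v_i} mod q."""
    t = 1
    for gi, vi in zip(g, vec):
        t = (t * pow(gi, vi, q)) % q
    return t

def combine_tags(tags, coeffs):
    """Derive the tag of a linear combination from the constituent tags."""
    t = 1
    for ti, c in zip(tags, coeffs):
        t = (t * pow(ti, c, q)) % q
    return t
```

A downstream node that mixes packets u and v with coefficients a and b can thus attach `combine_tags([tag(u), tag(v)], [a, b])`, which verifies against the mixed data exactly as if the source had tagged it.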
|
{
"cite_N": [
"@cite_21",
"@cite_25",
"@cite_6",
"@cite_8"
],
"mid": [
"",
"1743877615",
"2153098283",
"2137771025"
],
"abstract": [
"",
"Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to modify data packets en route ; for this reason, standard signature schemes are inapplicable and it is a challenge to provide resilience to tampering by malicious nodes. We propose two signature schemes that can be used in conjunction with network coding to prevent malicious modification of data. Our schemes can be viewed as signing linear subspaces in the sense that a signature *** on a subspace V authenticates exactly those vectors in V . Our first scheme is (suitably) homomorphic and has constant public-key size and per-packet overhead. Our second scheme does not rely on random oracles and is based on weaker assumptions. We also prove a lower bound on the length of signatures for linear subspaces showing that our schemes are essentially optimal in this regard.",
"Recent research has shown that network coding can be used in content distribution systems to improve the speed of downloads and the robustness of the systems. However, such systems are very vulnerable to attacks by malicious nodes, and we need to have a signature scheme that allows nodes to check the validity of a packet without decoding. In this paper, we propose such a signature scheme for network coding. Our scheme makes use of the linearity property of the packets in a coded system, and allows nodes to check the integrity of the packets received easily. We show that the proposed scheme is secure, and its overhead is negligible for large files.",
"Network coding provides the possibility to maximize network throughput and receives various applications in traditional computer networks, wireless sensor networks and peer-to-peer systems. However, the applications built on top of network coding are vulnerable to pollution attacks, in which the compromised forwarders can inject polluted or forged messages into networks. Existing schemes addressing pollution attacks either require an extra secure channel or incur high computation overhead. In this paper, we propose an efficient signature-based scheme to detect and filter pollution attacks for the applications adopting linear network coding techniques. Our scheme exploits a novel homomorphic signature function to enable the source to delegate its signing authority to forwarders, that is, the forwarders can generate the signatures for their output messages without contacting the source. This nice property allows the forwarders to verify the received messages, but prohibit them from creating the valid signatures for polluted or forged ones. Our scheme does not need any extra secure channels, and can provide source authentication and batch verification. Experimental results show that it can improve computation efficiency up to ten times compared to some existing one. In addition, we present an alternate lightweight scheme based on a much simpler linear signature function. This alternate scheme provides a tradeoff between computation efficiency and security."
]
}
|
1108.2092
|
2950187556
|
The class of weakly acyclic games, which includes potential games and dominance-solvable games, captures many practical application domains. In a weakly acyclic game, from any starting state, there is a sequence of better-response moves that leads to a pure Nash equilibrium; informally, these are games in which natural distributed dynamics, such as better-response dynamics, cannot enter inescapable oscillations. We establish a novel link between such games and the existence of pure Nash equilibria in subgames. Specifically, we show that the existence of a unique pure Nash equilibrium in every subgame implies the weak acyclicity of a game. In contrast, the possible existence of multiple pure Nash equilibria in every subgame is insufficient for weak acyclicity in general; here, we also systematically identify the special cases (in terms of the number of players and strategies) for which this is sufficient to guarantee weak acyclicity.
|
Weak acyclicity has been specifically addressed in a handful of specially-structured games: in an applied setting, BGP with backup routing @cite_15 ; in a game-theoretic setting, games with "strategic complementarities" @cite_17 @cite_10 (a supermodularity condition on lattice-structured strategy sets); and in an algorithmic setting, several kinds of succinct games @cite_3 . Milchtaich @cite_18 studied Rosenthal's congestion games @cite_14 and proved that, in interesting cases, such games are weakly acyclic even if the payoff functions (utilities) are not universal but player-specific. @cite_13 formulated the cooperative-control-theoretic consensus problem as a potential game (implying that it is weakly acyclic); they also defined and investigated a time-varying version of weak acyclicity.
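Better-response dynamics, the process underlying weak acyclicity, can be sketched on a toy two-player coordination game (a potential game, hence weakly acyclic). The payoff table is a made-up example, not taken from any of the cited works: from every starting profile, following any strictly improving unilateral deviation must terminate at a pure Nash equilibrium.

```python
# Better-response dynamics on a toy 2-player, 2-strategy coordination game.
# In a weakly acyclic game, from every state some better-response sequence
# reaches a pure Nash equilibrium; in a potential game, every one does.
payoff = {  # (s0, s1) -> (u0, u1); illustrative coordination payoffs
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def better_response_path(state):
    """Follow better responses until no player can improve unilaterally."""
    path = [state]
    while True:
        s = path[-1]
        move = None
        for i in (0, 1):                      # each player in turn
            for alt in (0, 1):                # each alternative strategy
                new = (alt, s[1]) if i == 0 else (s[0], alt)
                if payoff[new][i] > payoff[s][i]:
                    move = new                 # strictly improving deviation
                    break
            if move:
                break
        if move is None:
            return path                        # terminal state: a pure NE
        path.append(move)
```

Note that the dynamics may settle in either equilibrium, (0, 0) or (1, 1), depending on the start; weak acyclicity only guarantees that some pure Nash equilibrium is reached.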
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2130080355",
"",
"2054129049",
"2043509535",
"",
"2042040994",
"2094499789"
],
"abstract": [
"This paper presents a view of cooperative control using the language of learning in games. We review the game theoretic concepts of potential games and weakly acyclic games and demonstrate how the specific cooperative control problem of consensus can be formulated in these settings. Motivated by this connection, we build upon game theoretic concepts to better accommodate a broader class of cooperative control problems. In particular, we introduce sometimes weakly acyclic games for time-varying objective functions and action sets, and provide distributed algorithms for convergence to an equilibrium. Finally, we illustrate how to implement these algorithms for the consensus problem in a variety of settings, most notably, in an environment with non-convex obstructions.",
"",
"A class of noncooperative games (of interest in certain applications) is described. Each game in the class is shown to possess at least one Nash equilibrium in pure strategies.",
"Studying Nash dynamics is an important approach for analyzing the outcome of games with repeated selfish behavior of self-interested agents. Sink equilibria has been introduced by Goemans, Mirrokni, and Vetta for studying social cost on Nash dynamics over pure strategies in games. However, they do not address the complexity of sink equilibria in these games. Recently, Fabrikant and Papadimitriou initiated the study of the complexity of Nash dynamics in two classes of games. In order to completely understand the complexity of Nash dynamics in a variety of games, we study the following three questions for various games: (i) given a state in game, can we verify if this state is in a sink equilibrium or not? (ii) given an instance of a game, can we verify if there exists any sink equilibrium other than pure Nash equilibria? and (iii) given an instance of a game, can we verify if there exists a pure Nash equilibrium (i.e, a sink equilibrium with one state)? In this paper, we almost answer all of the above questions for a variety of classes of games with succinct representation, including anonymous games, player-specific and weighted congestion games, valid-utility games, and two-sided market games. In particular, for most of these problems, we show that (i) it is PSPACE-hard to verify if a given state is in a sink equilibrium, (ii) it is NP-hard to verify if there exists a pure Nash equilibrium in the game or not, (iii) it is PSPACE-hard to verify if there exists any sink equilibrium other than pure Nash equilibria. To solve these problems, we illustrate general techniques that could be used to answer similar questions in other classes of games.",
"",
"In a finite game with strategic complementarities, every strategy profile is connected to a Nash equilibrium with a finite individual improvement path. If, additionally, the strategies are scalar, then every strategy profile is connected to a Nash equilibrium with a finite individual best response improvement path.",
"Abstract We study repeated interactions among a fixed set of “low rationality” players who have status quo actions, randomly sample other actions, and change their status quo if the sampled action yields a higher payoff. This behavior generates a random process, the better-reply dynamics . Long run behavior leads to Nash equilibrium in games with the weak finite improvement property , including finite, supermodular games and generic, continuous, two-player, quasi-concave games. If players make mistakes and if several players can sample at the same time, the resulting better-reply dynamics with simultaneous sampling converges to the Pareto optimal Nash equilibrium in common interest games. Journal of Economic Literature Classification Numbers: C70, C72, C73."
]
}
|
1108.2290
|
2949659778
|
We show that every n-point tree metric admits a (1+eps)-embedding into a C(eps) log n-dimensional L_1 space, for every eps > 0, where C(eps) = O((1 eps)^4 log(1 eps)). This matches the natural volume lower bound up to a factor depending only on eps. Previously, it was unknown whether even complete binary trees on n nodes could be embedded in O(log n) dimensions with O(1) distortion. For complete d-ary trees, our construction achieves C(eps) = O(1 eps^2).
|
Unfortunately, the use of the Local Lemma does not extend well to the more difficult setting of arbitrary trees. For the general case, we employ an idea of Schulman @cite_7 based on re-randomization. To see the idea in our simple setting, consider @math to be composed of a root @math , under which lie two copies of @math , which we call @math and @math , having roots @math and @math , respectively.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2115551261"
],
"abstract": [
"Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general, interactive protocols of Shannon's coding theorem, which deals only with data transmission, i.e., one-way protocols. We cannot use Shannon's block coding method because the bits exchanged in the protocol are determined only one at a time, dynamically, in the course of the interaction. Instead, we describe a simulation protocol using a new kind of code, explicit tree codes."
]
}
|
1108.1377
|
2951615684
|
We study network loss tomography based on observing average loss rates over a set of paths forming a tree -- a severely underdetermined linear problem for the unknown link loss probabilities. We examine in detail the role of sparsity as a regularising principle, pointing out that the problem is technically distinct from others in the compressed sensing literature. While sparsity has been applied in the context of tomography, key questions regarding uniqueness and recovery remain unanswered. Our work exploits the tree structure of path measurements to derive sufficient conditions for sparse solutions to be unique and the condition that @math minimization recovers the true underlying solution. We present a fast single-pass linear algorithm for @math minimization and prove that a minimum @math solution is both unique and sparsest for tree topologies. By considering the placement of lossy links within trees, we show that sparse solutions remain unique more often than is commonly supposed. We prove similar results for a noisy version of the problem.
|
The work most closely related to our own is @cite_21 , which answers some of the key questions for CS over graphs. The key difference is that we work with trees instead of general networks. The simpler tree topology enables far greater insight into the sparse and @math solutions, and allows explicit solutions and fast algorithms to be defined. In @cite_21 , the authors determine the number of random measurements over underlying network paths needed to uniquely recover sparse link solutions. Random measurements, however, are difficult to justify in the tomography context. Conversely, for a given measurement matrix they provide upper bounds on the number of lossy links consistent with uniqueness of the sparsest solution. These bounds are quite restrictive for trees. For example, for any ternary tree, irrespective of its size, the largest allowed number of lossy links is @math , and for a binary tree the price of a uniqueness guarantee is that only a single link may be lossy. In section , we see that for a ternary tree with @math links, even when @math links are lossy, the sparsest solution is still unique for $95
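The uniqueness question can be checked directly on small instances. The sketch below is a brute-force search for the sparsest nonnegative link vector consistent with root-to-leaf path measurements on a tiny made-up tree; it is not the fast single-pass l1 algorithm of the paper, only an exhaustive baseline for verifying examples.

```python
# Brute-force sparsest-solution search for tree loss tomography.
# The tree and measurements below are illustrative assumptions.
from fractions import Fraction
from itertools import combinations

# Tiny tree: links 0:r-a, 1:r-b, 2:a-l1, 3:a-l2, 4:b-l3.
# Rows are root-to-leaf paths, columns are links (path-link incidence).
A = [[1, 0, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1]]

def exact_solve(B, y):
    """Solve B z = y exactly over the rationals; None if inconsistent."""
    m, n = len(B), len(B[0])
    M = [row[:] + [yi] for row, yi in zip(B, y)]
    piv, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        piv.append(c)
        r += 1
    if any(M[i][n] != 0 for i in range(r, m)):
        return None                      # inconsistent system
    z = [Fraction(0)] * n
    for i, c in enumerate(piv):
        z[c] = M[i][n]                   # free variables left at zero
    return z

def sparsest(A, y):
    """Smallest-support nonnegative x with A x = y, by exhaustive search."""
    links = len(A[0])
    for k in range(links + 1):
        for S in combinations(range(links), k):
            if not S:
                if all(v == 0 for v in y):
                    return [Fraction(0)] * links
                continue
            B = [[Fraction(row[j]) for j in S] for row in A]
            z = exact_solve(B, [Fraction(v) for v in y])
            if z is not None and all(v >= 0 for v in z):
                x = [Fraction(0)] * links
                for j, v in zip(S, z):
                    x[j] = v
                return x
```

With a single lossy link near the root, the measurements pin down the unique sparsest solution even though the linear system A x = y is underdetermined (3 paths, 5 links).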
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2950465072"
],
"abstract": [
"In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing the properties of the edges from a graph. Unlike existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with @math nodes, it is shown that, using @math path measurements, we are able to recover any @math -sparse link vector (with no more than @math nonzero elements), even though the measurements have to follow the graph path constraints. We further show that the computationally efficient @math minimization can provide theoretical guarantees for inferring such @math -sparse vectors with @math path measurements from the graph."
]
}
|
1108.1554
|
2949469986
|
We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.
|
In the deterministic group, Zafer and Modiano @cite_1 use deterministic network calculus to model traffic arrival and traffic departure. They use a power-rate function to link the traffic departure rate and the energy consumption rate. By considering the special features of specific power-rate functions, they formulate and solve the optimal transmission scheduling problem under the given energy constraints. Their work focuses only on single-node analysis and assumes that traffic arrivals and service rates are deterministic. Kansal et al. @cite_10 propose a so-termed harvesting theory to help the energy management of a sensor node and determine the performance levels that the node can support. The basic idea of the harvesting theory is to use a leaky-bucket model to represent energy supply and energy depletion. Moser et al. @cite_20 describe energy-aware scheduling and prove the conditions for a scheduling algorithm to be optimal in a system whose energy storage is replenished.
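The leaky-bucket view of a harvesting source amounts to simple cumulative bookkeeping of charged versus consumed energy, clipped to the storage capacity. The slot-based function below is an illustrative sketch with made-up parameters, not the analytical bounds derived in the cited models.

```python
# Sketch of leaky-bucket energy bookkeeping: the battery level after each
# time slot, clipped to capacity on charge. A negative level would mean the
# energy constraint (and hence the QoS guarantee) was violated.
def energy_levels(charge, consume, capacity, e0=0.0):
    """charge[t] / consume[t]: energy harvested / spent in slot t."""
    level, levels = e0, []
    for c, d in zip(charge, consume):
        level = min(capacity, level + c) - d   # charge first, then serve
        if level < 0:
            level = 0.0   # in the cited analyses this is an energy outage
        levels.append(level)
    return levels
```

Harvest beyond capacity is lost (the `min` clip), which is exactly why a harvesting source is characterized by a maximum usable rate rather than a total energy budget.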
|
{
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_20"
],
"mid": [
"1991781995",
"2160706299",
"2092471931"
],
"abstract": [
"Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.",
"Transmission rate adaptation in wireless devices provides a unique opportunity to trade off data service rate with energy consumption. In this paper, we study optimal rate control to minimize transmission energy expenditure subject to strict deadline or other quality-of-service (QoS) constraints. Specifically, the system consists of a wireless transmitter with controllable transmission rate and with strict QoS constraints on data transmission. The goal is to obtain a rate-control policy that minimizes the total transmission energy expenditure while ensuring that the QoS constraints are met. Using a novel formulation based on cumulative curves methodology, we obtain the optimal transmission policy and show that it has a simple and appealing graphical visualization. Utilizing the optimal \"offline\" results, we then develop an online transmission policy for an arbitrary stream of packet arrivals and deadline constraints, and show, via simulations, that it is significantly more energy-efficient than a simple head-of-line drain policy. Finally, we generalize the optimal policy results to the case of time-varying power-rate functions.",
"Energy harvesting has recently emerged as a feasible option to increase the operating time of sensor networks. If each node of the network, however, is powered by a fluctuating energy source, common power management solutions have to be reconceived. This holds in particular if real-time responsiveness of a given application has to be guaranteed. Task scheduling at the single nodes should account for the properties of the energy source, capacity of the energy storage as well as deadlines of the single tasks. We show that conventional scheduling algorithms (like e.g. EDF) are not suitable for this scenario. Based on this motivation, we have constructed optimal scheduling algorithms that jointly handle constraints from both energy and time domain. Further we present an admittance test that decides for arbitrary task sets, whether they can be scheduled without deadline violations. To this end, we introduce the concept of energy variability characterization curves (EVCC) which nicely captures the dynamics of various energy sources. Simulation results show that our algorithms allow significant reductions of the battery size compared to Earliest Deadline First scheduling."
]
}
|
1108.1554
|
2949469986
|
We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.
|
In the stochastic group, Markov chain models have been used extensively. Susu @cite_4 use a discrete-time Markov chain in which states represent different energy levels. Other work @cite_5 uses a Markov chain model to capture the influence of clouds and wind on solar radiation intensity. Related to stochastic energy modeling, there have been many efforts to predict a stochastic energy supply. Lu @cite_17 assess three prediction techniques: regression analysis, moving average, and exponential smoothing. Recas @cite_12 propose a weather-conditioned moving average (WCMA) model, which adapts to long-term seasonal changes and short-term sudden weather changes. Moser @cite_20 introduce energy variability characterization curves to predict the power provided by a harvesting unit.
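Two of the prediction techniques assessed by Lu @cite_17 are easy to sketch. The solar trace and the smoothing parameters below are hypothetical, chosen only to illustrate how the two predictors differ.

```python
def moving_average(history, window=3):
    """Predict the next value as the mean of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def exp_smoothing(history, alpha=0.5):
    """Predict the next value by exponentially weighting the past:
    recent observations count more, older ones decay geometrically."""
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

solar = [10, 12, 11, 13, 12, 14]   # hypothetical harvested energy per slot
next_ma = moving_average(solar)
next_es = exp_smoothing(solar)
```

WCMA extends the plain moving average by additionally conditioning on observed weather, so it reacts faster to sudden changes than either predictor above.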
|
{
"cite_N": [
"@cite_4",
"@cite_5",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"2169108267",
"2128603408",
"2129138504",
"2092471931",
""
],
"abstract": [
"Environmental energy is becoming a feasible alternative for many low-power systems, such as wireless sensor nodes. Designing an environmentally powered device faces several challenges: choosing the exact type of the energy harvester, the energy storage elements and determining the duty cycle of the application. With harvesting, the design process becomes even more difficult because it also has to take into account the unpredictability of the energy source. The contribution of this paper is a methodology that facilitates the analysis of energy harvesting nodes. The existing modeling strategies for battery powered systems are not suitable because they do not capture the uncertainty of the power source. Also, the metrics of interest for battery powered devices are different, as opposed to the harvesting powered ones: in the former case we search to maximize the system lifetime, while in the latter case a more expressive goal is to increase the system availability.",
"A queuing analytical model is presented to investigate the performances of different sleep and wakeup strategies in a solar-powered wireless sensor mesh network where a solar cell is used to charge the battery in a sensor mesh node. While the solar radiation process (and, hence, the energy generation process in a solar cell) is modeled by a stochastic process (i.e., a Markov chain), a linear battery model with relaxation effect is used to model the battery capacity recovery process. Developed based on a multidimensional discrete-time Markov chain, the presented model is used to analyze the performances of different sleep and wakeup strategies in a sensor mesh node. The packet dropping and packet blocking probabilities at a node are the major performance metrics. The numerical results obtained from the analytical model are validated by extensive simulations. In addition, using the queuing model, based on a game-theoretic formulation, we demonstrate how to obtain the optimal parameters for a particular sleep and wakeup strategy. In this case, we formulate a bargaining game by exploiting the trade-off between packet blocking and packet dropping probabilities due to the sleep and wakeup dynamics in a sensor mesh node. The Nash solution is obtained for the equilibrium point of sleep and wakeup probabilities. The presented queuing model, along with the game-theoretic formulation, would be useful for the design and optimization of energy-efficient protocols for solar-powered wireless sensor mesh networks under quality-of-service (QoS) constraints",
"Energy harvesting sensor nodes (EHSNs) have stringent low-energy consumption requirements, but they need to concurrently execute several types of tasks (processing, sensing, actuation, etc.). Furthermore, no accurate models exist to predict the energy harvesting income in order to adapt at run-time the executing set of prioritized tasks. In this article, we propose a novel power-aware task scheduler for EHSNs, namely, HOLLOWS: Head-of-Line Low-Overhead Wide-priority Service. HOLLOWS uses an energy-constrained prioritized queue model to describe the residence time of tasks entering the system and dynamically selects the set of tasks to execute, according to system accuracy requirements and expected energy. Moreover, HOLLOWS includes a new energy harvesting prediction algorithm, that is, weather-conditioned moving average (WCMA), which we have developed to estimate the solar panel energy income. We have tested HOLLOWS using the real-life working conditions of Shimmer, a sensor node for structural health monitoring. Our results indicate that HOLLOWS accurately predicts the energy available in Shimmer to guarantee a certain damage monitoring quality for long-term autonomous scenarios. Also, HOLLOWS is able to adjust the use of the incoming energy harvesting to achieve high accuracy for rapid event damage assessment (after earthquakes, fires, etc.).",
"Energy harvesting has recently emerged as a feasible option to increase the operating time of sensor networks. If each node of the network, however, is powered by a fluctuating energy source, common power management solutions have to be reconceived. This holds in particular if real-time responsiveness of a given application has to be guaranteed. Task scheduling at the single nodes should account for the properties of the energy source, capacity of the energy storage as well as deadlines of the single tasks. We show that conventional scheduling algorithms (like e.g. EDF) are not suitable for this scenario. Based on this motivation, we have constructed optimal scheduling algorithms that jointly handle constraints from both energy and time domain. Further we present an admittance test that decides for arbitrary task sets, whether they can be scheduled without deadline violations. To this end, we introduce the concept of energy variability characterization curves (EVCC) which nicely captures the dynamics of various energy sources. Simulation results show that our algorithms allow significant reductions of the battery size compared to Earliest Deadline First scheduling.",
""
]
}
|
1108.1554
|
2949469986
|
We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.
|
We develop our analytical framework based on stochastic network calculus @cite_14 @cite_22 @cite_6 @cite_7 . Unlike deterministic network calculus @cite_19 @cite_3 , which derives worst-case performance bounds, stochastic network calculus derives performance bounds that hold with high probability, i.e., there is a small probability that a bound is violated. Since most renewable energy sources, such as solar and wind energy, are not deterministic, stochastic network calculus is a good fit for the performance evaluation of systems using renewable energy. Nevertheless, traditional stochastic network calculus was not originally targeted at modeling such systems, and substantial work is required to extend this useful theory.
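At the core of the deterministic calculus is the (min,+) convolution of an arrival curve and a service curve; worst-case bounds fall out of the deviations between the two. A toy sketch with an invented token-bucket arrival curve (burst 2, rate 1) and rate-latency service curve (rate 2, latency 3), not taken from the cited works:

```python
def alpha(t):
    """Token-bucket arrival curve: at most 2 + t units arrive in t slots."""
    return 0 if t == 0 else 2 + t

def beta(t):
    """Rate-latency service curve: rate 2 after a latency of 3 slots."""
    return 2 * max(0, t - 3)

def min_plus_conv(f, g, t):
    """(f (x) g)(t) = min over 0 <= s <= t of f(s) + g(t - s)."""
    return min(f(s) + g(t - s) for s in range(t + 1))

# Deterministic backlog bound: the maximum vertical deviation
# sup_t [alpha(t) - beta(t)] between the two curves.
backlog_bound = max(alpha(t) - beta(t) for t in range(100))
```

The stochastic calculus keeps this algebra but lets alpha and beta be violated with small, bounded probabilities; the paper's charging/discharging curves play the roles of alpha and beta for energy instead of traffic.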
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_6",
"@cite_3",
"@cite_19"
],
"mid": [
"",
"2116690616",
"1589801689",
"2140314025",
"2147206873",
"1978905175"
],
"abstract": [
"",
"Network calculus is a min-plus system theory for performance evaluation of queuing networks. Its elegance steins from intuitive convolution formulas for concatenation of deterministic servers. Recent research dispenses with the worst-case assumptions of network calculus to develop a probabilistic equivalent that benefits from statistical multiplexing. Significant achievements have been made, owing for example to the theory of effective bandwidths; however, the outstanding scalability set up by concatenation of deterministic servers has not been shown. This paper establishes a concise, probabilistic network calculus with moment generating functions. The presented work features closed-form, end-to-end, probabilistic performance bounds that achieve the objective of scaling linearly in the number of servers in series. The consistent application of moment generating functions put forth in this paper utilizes independence beyond the scope of current statistical multiplexing of flows. A relevant additional gain is demonstrated for tandem servers with independent cross-traffic",
"Network calculus, a theory dealing with queuing systems found in computer networks, focuses on performance guarantees. The development of an information theory for stochastic service-guarantee analysis has been identified as a grand challenge for future networking research. Towards that end, stochastic network calculus, the probabilistic version or generalization of the (deterministic) Network Calculus, has been recognized by researchers as a crucial step. Stochastic Network Calculus presents a comprehensive treatment for the state-of-the-art in stochastic service-guarantee analysis research and provides basic introductory material on the subject, as well as discusses the most recent research in the area. This helpful volume summarizes results for stochastic network calculus, which can be employed when designing computer networks to provide stochastic service guarantees. Features and Topics: Provides a solid introductory chapter, providing useful background knowledge Reviews fundamental concepts and results of deterministic network calculus Includes end-of-chapter problems, as well as summaries and bibliographic comments Defines traffic models and server models for stochastic network calculus Summarizes the basic properties of stochastic network calculus under different combinations of traffic and server models Highlights independent case analysis Discusses stochastic service guarantees under different scheduling disciplines Presents applications to admission control and traffic conformance study using the analysis results Offers an overall summary and some open research challenges for further study of the topic Key Topics: Queuing systems Performance analysis and guarantees Independent case analysis Traffic and server models Analysis of scheduling disciplines Generalized processor sharing Open research challenges Researchers and graduates in the area of performance evaluation of computer communication networks will benefit substantially from this comprehensive and easy-to-follow volume. Professionals will also find it a worthwhile reference text. Professor Yuming Jiang at the Norwegian University of Science and Technology (NTNU) has lectured using the material presented in this text since 2006. Dr Yong Liu works at the Optical Network Laboratory, National University of Singapore, where he researches QoS for optical communication networks and Metro Ethernet networks.",
"A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m. b. c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m. b. c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min,+) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i)-(v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.",
"Network Calculus.- Application of Network Calculus to the Internet.- Basic Min-plus and Max-plus Calculus.- Min-plus and Max-plus System Theory.- Optimal Multimedia Smoothing.- FIFO Systems and Aggregate Scheduling.- Adaptive and Packet Scale Rate Guarantees.- Time Varying Shapers.- Systems with Losses.",
"From the Publisher: Providing performance guarantees is one of the most important issues for future telecommunication networks. This book describes theoretical developments in performance guarantees for telecommunication networks from the last decade. Written for the benefit of graduate students and scientists interested in telecommunications-network performance this book consists of two parts."
]
}
|
1108.1554
|
2949469986
|
We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.
|
Recent interesting work by Wang @cite_8 uses stochastic network calculus to evaluate the reliability of a power grid with renewable energy sources. Their energy supply and demand models are a subset of the models we present, and their work is a good example of how to tailor our models to a specific application. Another difference is that Wang define energy supply and energy demand as two decoupled random processes, whereas in our work energy discharging is inherently coupled with energy charging.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1988498871"
],
"abstract": [
"The renewable energy generation such as solar and wind will constitute an important part of the next generation grid. As the variations of renewable sources may not match the time distribution of load, energy storage is essential for grid stability. Supplemented with energy storage, we investigate the feasibility of integrating solar photovoltaic (PV) panels and wind turbines into the grid. To deal with the fluctuation in both the power generation and demand, we borrow the ideas from stochastic network calculus and build a stochastic model for the power supply reliability with different renewable energy configurations. To illustrate the validity of the model, we conduct a case study for the integration of renewable energy sources into the power system of an island off the coast of Southern California. Performance of the hybrid system under study is assessed by employing the stochastic model, e.g., with a set of system configurations, the long-term expected Fraction of Time that energy Not-Served (FTNS) of a given period can be obtained."
]
}
|
1108.1554
|
2949469986
|
We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.
|
Finally, related to analytical frameworks for performance modeling, there is a large body of research on energy-aware scheduling algorithms. For example, Niyato @cite_5 investigate the impact of different sleep and wake-up strategies on data communication among solar-powered wireless nodes. In @cite_11 , Vigorito propose an adaptive duty-cycling algorithm that maintains operational power levels at wireless sensor nodes regardless of changing environmental conditions. In @cite_13 , Gorlatova measure the energy availability in indoor environments and, based on the measurement results, develop algorithms to determine energy allocation in systems with predictable energy inputs as well as in systems where energy inputs are stochastic; in the stochastic model, they assume that energy inputs are i.i.d. random variables. Unlike the above work, our analytical framework is generic and uses only abstract notions of energy charging/discharging amounts, traffic arrival amounts, etc. As such, all of the above work can be treated as special cases of our more general framework, in which the abstract functions are replaced with concrete ones for specific applications.
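The adaptive duty-cycling idea can be caricatured with a simple feedback rule: raise the duty cycle when the battery sits above a target level and lower it otherwise, so consumption tracks the varying harvest. The controller below is a hypothetical sketch; the gain, target, and energy numbers are invented and do not come from the cited papers.

```python
def adapt_duty_cycle(duty, battery, target=50.0, gain=0.002,
                     lo=0.05, hi=1.0):
    """Nudge the duty cycle toward energy-neutral operation:
    proportional correction on the battery-level error, clamped."""
    duty += gain * (battery - target)
    return max(lo, min(hi, duty))

battery, duty = 50.0, 0.5
for harvest in [8, 2, 0, 1, 9, 10, 3, 0]:   # hypothetical per-slot harvest
    battery += harvest - 10 * duty           # consumption scales with duty
    duty = adapt_duty_cycle(duty, battery)
```

The clamp keeps the node responsive (duty never drops to zero) while preventing it from committing to a workload the harvest cannot sustain.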
|
{
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2128603408",
"2123022663",
"2001308229"
],
"abstract": [
"A queuing analytical model is presented to investigate the performances of different sleep and wakeup strategies in a solar-powered wireless sensor mesh network where a solar cell is used to charge the battery in a sensor mesh node. While the solar radiation process (and, hence, the energy generation process in a solar cell) is modeled by a stochastic process (i.e., a Markov chain), a linear battery model with relaxation effect is used to model the battery capacity recovery process. Developed based on a multidimensional discrete-time Markov chain, the presented model is used to analyze the performances of different sleep and wakeup strategies in a sensor mesh node. The packet dropping and packet blocking probabilities at a node are the major performance metrics. The numerical results obtained from the analytical model are validated by extensive simulations. In addition, using the queuing model, based on a game-theoretic formulation, we demonstrate how to obtain the optimal parameters for a particular sleep and wakeup strategy. In this case, we formulate a bargaining game by exploiting the trade-off between packet blocking and packet dropping probabilities due to the sleep and wakeup dynamics in a sensor mesh node. The Nash solution is obtained for the equilibrium point of sleep and wakeup probabilities. The presented queuing model, along with the game-theoretic formulation, would be useful for the design and optimization of energy-efficient protocols for solar-powered wireless sensor mesh networks under quality-of-service (QoS) constraints",
"Recent advances in energy harvesting materials and ultra-low-power communications will soon enable the realization of networks composed of energy harvesting devices. These devices will operate using very low ambient energy, such as indoor light energy. We focus on characterizing the energy availability in indoor environments and on developing energy allocation algorithms for energy harvesting devices. First, we present results of our long-term indoor radiant energy measurements, which provide important inputs required for algorithm and system design (e.g., determining the required battery sizes). Then, we focus on algorithm development, which requires nontraditional approaches, since energy harvesting shifts the nature of energy-aware protocols from minimizing energy expenditure to optimizing it. Moreover, in many cases, different energy storage types (rechargeable battery and a capacitor) require different algorithms. We develop algorithms for determining time fair energy allocation in systems with predictable energy inputs, as well as in systems where energy inputs are stochastic.",
"Increasingly many wireless sensor network deployments are using harvested environmental energy to extend system lifetime. Because the temporal profiles of such energy sources exhibit great variability due to dynamic weather patterns, an important problem is designing an adaptive duty-cycling mechanism that allows sensor nodes to maintain their power supply at sufficient levels (energy neutral operation) by adapting to changing environmental conditions. Existing techniques to address this problem are minimally adaptive and assume a priori knowledge of the energy profile. While such approaches are reasonable in environments that exhibit low variance, we find that it is highly inefficient in more variable scenarios. We introduce a new technique for solving this problem based on results from adaptive control theory and show that we achieve better performance than previous approaches on a broader class of energy source data sets. Additionally, we include a tunable mechanism for reducing the variance of the node's duty cycle over time, which is an important feature in tasks such as event monitoring. We obtain reductions in variance as great as two-thirds without compromising task performance or ability to maintain energy neutral operation."
]
}
|
1108.1130
|
2950382808
|
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have the integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis , and then by Mömke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Mömke and Svensson yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12+epsilon for any epsilon>0 for a more general Travelling Salesman Path Problem in graphic metrics.
|
The Travelling Salesman Problem (TSP) is one of the most fundamental and most studied problems in combinatorial optimization, and in approximation algorithms in particular. In the most standard version of the problem, we are given a metric @math and the goal is to find a closed tour that visits each point of @math exactly once and has minimum total cost, as measured by @math . This problem is APX-hard, and the best known approximation factor of @math was obtained by Christofides @cite_8 more than thirty years ago. However, the so-called Held-Karp LP relaxation of TSP is conjectured to have an integrality gap of @math . The gap is known to be at least that large; however, the best known upper bound @cite_0 on the gap is given by Christofides's algorithm and is equal to @math .
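Christofides's algorithm makes an MST Eulerian by adding a minimum-cost perfect matching on its odd-degree vertices. Omitting the matching step and simply shortcutting a traversal of the doubled MST already gives the classic 2-approximation; that simpler relative (not Christofides's algorithm itself) is sketched below on an invented toy metric.

```python
def tsp_double_tree(dist):
    """2-approximation for metric TSP: build an MST, then shortcut a
    depth-first traversal of it. The odd-degree matching step that
    Christofides adds on top is deliberately omitted here."""
    n = len(dist)
    in_tree, adj = {0}, {i: [] for i in range(n)}
    while len(in_tree) < n:                      # Prim's MST
        u, v = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree), key=lambda e: dist[e[0]][e[1]])
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    tour, seen, stack = [], set(), [0]           # preorder = shortcut Euler tour
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            tour.append(u)
            stack.extend(adj[u])
    return tour + [0]

# toy metric: four points on a line, dist[i][j] = |i - j|
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
tour = tsp_double_tree(dist)
```

The shortcutting relies on the triangle inequality: skipping already-visited vertices never increases the tour cost, which is why the metric assumption matters.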
|
{
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2063836932",
"2117226423"
],
"abstract": [
"Abstract In their 1971 paper on the travelling salesman problem and minimum spanning trees, Held and Karp showed that finding an optimally weighted 1-tree is equivalent to solving a linear program for the traveling salesman problem (TSP) with only node-degree constraints and subtour elimination constraints. In this paper we show that the Held-Karp 1-trees have a certain monotonicity property: given a particular instance of the symmetric TSP with triangle inequality, the cost of the minimum weighted 1-tree is monotonic with respect to the set of nodes included. As a consequence, we obtain an alternate proof of a result of Wolsey and show that linear programs with node-degree and subtour elimination constraints must have a cost at least 2/3 OPT where OPT is the cost of the optimum solution to the TSP instance.",
"Abstract : An O(n^3) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition. The algorithm involves as substeps the computation of a shortest spanning tree of the graph G defining the TSP, and the finding of a minimum cost perfect matching of a certain induced subgraph of G. A worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3/2. This represents a 50% reduction over the value 2 which was the previously best known such ratio for the performance of other polynomial-growth algorithms for the TSP."
]
}
|
1108.1130
|
2950382808
|
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have the integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis , and then by Mömke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Mömke and Svensson yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12+epsilon for any epsilon>0 for a more general Travelling Salesman Path Problem in graphic metrics.
|
One natural direction for attacking these problems is to consider special cases, and several attempts of this nature have been made. By far the most interesting one is graphic TSP/TSPP, where we assume that the given metric is the shortest-path metric of an undirected graph. Equivalently, in graphic TSP we are given an undirected graph @math and we need to find a shortest tour that visits each vertex at least once. Yet another formulation would ask for a minimum size Eulerian multigraph spanning @math and only using edges of @math . Similar formulations apply to the graphic TSPP case. The reason why these special cases are very interesting is that they seem to include the difficult inputs of TSP/TSPP. Not only are they APX-hard (see @cite_7 ), but the standard examples showing that the Held-Karp relaxation has a gap of at least @math in the TSP case and @math in the TSPP case are in fact graphic metrics.
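The shortest-path metric underlying graphic TSP is easy to materialize: run all-pairs shortest paths on the unit-weight graph. A small sketch using Floyd-Warshall on a hypothetical 4-cycle:

```python
def shortest_path_metric(n, edges):
    """Floyd-Warshall on an unweighted undirected graph; the resulting
    distance matrix is exactly the 'graphic metric' of the graph."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:                 # unit-weight, undirected edges
        d[u][v] = d[v][u] = 1
    for k in range(n):                 # relax through intermediate vertex k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# hypothetical example: the cycle 0-1-2-3-0
d = shortest_path_metric(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The triangle inequality holds automatically for such a matrix, so any metric-TSP machinery applies; the special structure is that every distance is realized by a path in the underlying graph.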
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2142607374"
],
"abstract": [
"We consider the special case of the traveling salesman problem (TSP) in which the distance metric is the shortest-path metric of a planar unweighted graph. We present a polynomial-time approximation scheme (PTAS) for this problem."
]
}
|
1108.1130
|
2950382808
|
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have the integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis , and then by Mömke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Mömke and Svensson yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12+epsilon for any epsilon>0 for a more general Travelling Salesman Path Problem in graphic metrics.
|
Very recently, significant progress has been made in approximating the graphic TSP and TSPP. First, Oveis Gharan et al. @cite_6 gave an algorithm with an approximation factor @math for graphic TSP. Despite @math being of the order of @math , this is considered a major breakthrough. Following that, Mömke and Svensson @cite_2 obtained a significantly better approximation factor of @math for graphic TSP, as well as factor @math for graphic TSPP, for any @math . Their approach uses matchings in a truly ingenious way. Whereas most earlier approaches (including that of Christofides @cite_8 as well as Oveis Gharan et al. @cite_6 ) add edges of a matching to a spanning tree to make it Eulerian, the new approach is based on adding and removing the matching edges. This process is guided by a so-called removable pairing of edges, which essentially encodes the information on which edges can be simultaneously removed from the graph without disconnecting it. A large removable pairing of edges is found by computing a minimum cost circulation in a certain auxiliary flow network, and the bounds on the cost of this circulation translate into bounds on the size of the resulting TSP tour/path.
|
{
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_2"
],
"mid": [
"2117226423",
"",
"2950432526"
],
"abstract": [
"Abstract : An O(n^3) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition. The algorithm involves as substeps the computation of a shortest spanning tree of the graph G defining the TSP, and the finding of a minimum cost perfect matching of a certain induced subgraph of G. A worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3/2. This represents a 50% reduction over the value 2 which was the previously best known such ratio for the performance of other polynomial-growth algorithms for the TSP.",
"",
"We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4 3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified."
]
}
|
1108.1130
|
2950382808
|
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have an integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al., and then by Mömke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Mömke and Svensson, yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12 + epsilon for any epsilon > 0 for the more general Travelling Salesman Path Problem in graphic metrics.
|
In the next section we present previous results relevant to the contributions of this paper; in particular, we recall key definitions and theorems of Mömke and Svensson @cite_2 . We then present the improved upper bound on the cost of the core part of the circulation, as well as an almost matching lower bound. Next, we prove that the correction part of the circulation is essentially free. Finally, we apply the results of the previous sections to obtain improved approximation algorithms for graphic TSP and TSPP.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2950432526"
],
"abstract": [
"We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4 3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified."
]
}
|
1108.1055
|
2950098584
|
Given a set of wireless links, a fundamental problem is to find the largest subset that can transmit simultaneously, within the SINR model of interference. Significant progress on this problem has been made in recent years. In this note, we study the problem in the setting where we are given a fixed set of arbitrary powers each sender must use, and an arbitrary gain matrix defining how signals fade. This variation of the problem appears immune to most algorithmic approaches studied in the literature. Indeed it is very hard to approximate since it generalizes the max independent set problem. Here, we propose a simple semi-definite programming approach to the problem that yields constant factor approximation, if the optimal solution is strictly larger than half of the input size.
|
Moscibroda and Wattenhofer @cite_7 were the first to study the scheduling complexity of an arbitrary set of wireless links. Early work on approximation algorithms produced approximation factors that grew with structural properties of the network @cite_18 @cite_25 @cite_13 .
|
{
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_25",
"@cite_7"
],
"mid": [
"2148868861",
"",
"2098270168",
"2098480450"
],
"abstract": [
"To date, topology control in wireless ad hoc and sensor networks -- the study of how to compute from the given communication network a subgraph with certain beneficial properties -- has been considered as a static problem only; the time required to actually schedule the links of a computed topology without message collision was generally ignored. In this paper we analyze topology control in the context of the physical Signal-to-Interference-plus-Noise-Ratio (SINR) model, focusing on the question of how and how fast the links of a resulting topology can actually be realized over time. For this purpose, we define and study a generalized version of the SINR model and obtain theoretical upper bounds on the scheduling complexity of arbitrary topologies in wireless networks. Specifically, we prove that even in worst-case networks, if the signals are transmitted with correctly assigned transmission power levels, the number of time slots required to successfully schedule all links of an arbitrary topology is proportional to the squared logarithm of the number of network nodes times a previously defined static interference measure. Interestingly, although originally considered without explicit accounting for signal collision in the SINR model, this static interference measure plays an important role in the analysis of link scheduling with physical link interference. Our result thus bridges the gap between static graph-based interference models and the physical SINR model. Based on these results, we also show that when it comes to scheduling, requiring the communication links to be symmetric may imply significantly higher costs as opposed to topologies allowing unidirectional links.",
"",
"In wireless networks mutual interference impairs the quality of received signals and might even prevent the correct reception of messages. It is therefore of paramount importance to dispose of power control and scheduling algorithms, coordinating the transmission of communication requests. We propose a new measure disturbance in order to comprise the intrinsic difficulty of finding a short schedule for a problem instance. Previously known approaches suffer from extremely bad performance in certain network scenarios even if disturbance is low. To overcome this problem, we present a novel scheduling algorithm for which we give analytical worst-case guarantees on its performance. Compared to previously known solutions, the algorithm achieves a speed up, which can be exponential in the size of the network.",
"We define and study the scheduling complexity in wireless networks, which expresses the theoretically achievable efficiency of MAC layer protocols. Given a set of communication requests in arbitrary networks, the scheduling complexity describes the amount of time required to successfully schedule all requests. The most basic and important network structure in wireless networks being connectivity, we study the scheduling complexity of connectivity, i.e., the minimal amount of time required until a connected structure can be scheduled. In this paper, we prove that the scheduling complexity of connectivity grows only polylogarithmically in the number of nodes. Specifically, we present a novel scheduling algorithm that successfully schedules a strongly connected set of links in time O(log n) even in arbitrary worst-case networks. On the other hand, we prove that standard MAC layer or scheduling protocols can perform much worse. Particularly, any protocol that either employs uniform or linear (a node’s transmit power is proportional to the minimum power required to reach its intended receiver) power assignment has an Ω(n) scheduling complexity in the worst case, even for simple communication requests. In contrast, our polylogarithmic scheduling algorithm allows many concurrent transmissions by using an explicitly formulated non-linear power assignment scheme. Our results show that even in large-scale worst-case networks, there is no theoretical scalability problem when it comes to scheduling transmission requests, thus giving an interesting complement to the more pessimistic bounds for the capacity in wireless networks. All results are based on the physical model of communication, which takes into account that the signal-to-noise plus interference ratio (SINR) at a receiver must be above a certain threshold if the transmission is to be received correctly."
]
}
|
1108.1055
|
2950098584
|
Given a set of wireless links, a fundamental problem is to find the largest subset that can transmit simultaneously, within the SINR model of interference. Significant progress on this problem has been made in recent years. In this note, we study the problem in the setting where we are given a fixed set of arbitrary powers each sender must use, and an arbitrary gain matrix defining how signals fade. This variation of the problem appears immune to most algorithmic approaches studied in the literature. Indeed it is very hard to approximate since it generalizes the max independent set problem. Here, we propose a simple semi-definite programming approach to the problem that yields constant factor approximation, if the optimal solution is strictly larger than half of the input size.
|
The first constant factor approximation algorithm was obtained for the capacity problem with uniform power in @cite_10 (see also @cite_20 ) in @math with @math . Fanghänel, Kesselheim and Vöcking @cite_27 gave an algorithm that uses at most @math slots for the scheduling problem with power assignment @math , which holds in general distance metrics.
|
{
"cite_N": [
"@cite_27",
"@cite_10",
"@cite_20"
],
"mid": [
"2154125468",
"2106242763",
"2100316242"
],
"abstract": [
"In the interference scheduling problem, one is given a set of n communication requests described by source-destination pairs of nodes from a metric space. The nodes correspond to devices in a wireless network. Each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests. We introduce an instance-based measure of interference, denoted by I, that enables us to improve on previous results for the interference scheduling problem. We prove upper and lower bounds in terms of I on the number of steps needed for scheduling a set of requests. For general power assignments, we prove a lower bound of Ω(I/(log Δ log n)) steps, where Δ denotes the aspect ratio of the metric. When restricting to the two-dimensional Euclidean space (as in the previous work) the bound improves to Ω(I/log n). Alternatively, when restricting to linear power assignments, the lower bound improves even to Ω(I). The lower bounds are complemented by an efficient algorithm computing a schedule for linear power assignments using only O(I log n) steps. A more sophisticated algorithm computes a schedule using even only O(I + log^2 n) steps. For dense instances in the two-dimensional Euclidean space, this gives a constant factor approximation for scheduling under linear power assignments, which shows that the price for using linear (and, hence, energy-efficient) power assignments is bounded by a factor of O(log Δ).
In addition, we extend these results for single-hop scheduling to multi-hop scheduling and combined scheduling and routing problems, where our analysis generalizes the previous results towards general metrics and improves on the previous approximation factors.",
"In this work we study the problem of determining the throughput capacity of a wireless network. We propose a scheduling algorithm to achieve this capacity within an approximation factor. Our analysis is performed in the physical interference model, where nodes are arbitrarily distributed in Euclidean space. We consider the problem separately from the routing problem and the power control problem, i.e., all requests are single-hop, and all nodes transmit at a fixed power level. The existing solutions to this problem have either concentrated on special-case topologies, or presented optimality guarantees which become arbitrarily bad (linear in the number of nodes) depending on the network's topology. We propose the first scheduling algorithm with approximation guarantee independent of the topology of the network. The algorithm has a constant approximation guarantee for the problem of maximizing the number of links scheduled in one time-slot. Furthermore, we obtain an O(log n) approximation for the problem of minimizing the number of time slots needed to schedule a given set of requests. Simulation results indicate that our algorithm does not only have an exponentially better approximation ratio in theory, but also achieves superior performance in various practical network scenarios. Furthermore, we prove that the analysis of the algorithm is extendable to higher-dimensional Euclidean spaces, and to more realistic bounded-distortion spaces, induced by non-isotropic signal distortions. Finally, we show that it is NP-hard to approximate the scheduling problem to within an n^(1-ε) factor, for any constant ε > 0, in the non-geometric SINR model, in which path-loss is independent of the Euclidean coordinates of the nodes.",
"In this paper we address a common question in wireless communication: How long does it take to satisfy an arbitrary set of wireless communication requests? This problem is known as the wireless scheduling problem. Our main result proves that wireless scheduling is in APX. In addition we present a robustness result, showing that constant parameter and model changes will modify the result only by a constant."
]
}
|
1108.1055
|
2950098584
|
Given a set of wireless links, a fundamental problem is to find the largest subset that can transmit simultaneously, within the SINR model of interference. Significant progress on this problem has been made in recent years. In this note, we study the problem in the setting where we are given a fixed set of arbitrary powers each sender must use, and an arbitrary gain matrix defining how signals fade. This variation of the problem appears immune to most algorithmic approaches studied in the literature. Indeed it is very hard to approximate since it generalizes the max independent set problem. Here, we propose a simple semi-definite programming approach to the problem that yields constant factor approximation, if the optimal solution is strictly larger than half of the input size.
|
Kesselheim obtained a @math -approximation algorithm for the capacity problem with power control for doubling metrics @cite_22 . Around the same time, the first constant-factor algorithm for all sub-linear, length-monotone power assignments was achieved on general metrics @cite_6 . Other recent studies in the SINR model include work on topological maps @cite_14 , distributed algorithms for scheduling @cite_24 , distributed power control @cite_9 , and auction-based spectrum allocation @cite_3 .
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_24"
],
"mid": [
"2952813714",
"1584433497",
"2136756148",
"2950068681",
"",
""
],
"abstract": [
"In this paper we study the topological properties of wireless communication maps and their usability in algorithmic design. We consider the SINR model, which compares the received power of a signal at a receiver against the sum of strengths of other interfering signals plus background noise. To describe the behavior of a multi-station network, we use the convenient representation of a reception map. In the SINR model, the resulting reception map partitions the plane into reception zones, one per station, and the complementary region of the plane where no station can be heard. We consider the general case where transmission energies are arbitrary (or non-uniform). Under that setting, the reception zones are not necessarily convex or even connected. This poses the algorithmic challenge of designing efficient point location techniques as well as the theoretical challenge of understanding the geometry of SINR diagrams. We achieve several results in both directions. We establish a form of weaker convexity in the case where stations are aligned on a line. In addition, one of our key results concerns the behavior of a @math -dimensional map. Specifically, although the @math -dimensional map might be highly fractured, drawing the map in one dimension higher \"heals\" the zones, which become connected. In addition, as a step toward establishing a weaker form of convexity for the @math -dimensional map, we study the interference function and show that it satisfies the maximum principle. Finally, we turn to consider algorithmic applications, and propose a new variant of approximate point location.",
"In modern wireless networks devices are able to set the power for each transmission carried out. Experimental but also theoretical results indicate that such power control can improve the network capacity significantly. We study this problem in the physical interference model using SINR constraints. In the SINR capacity maximization problem, we are given n pairs of senders and receivers, located in a metric space (usually a so-called fading metric). The algorithm shall select a subset of these pairs and choose a power level for each of them with the objective of maximizing the number of simultaneous communications. This is, the selected pairs have to satisfy the SINR constraints with respect to the chosen powers. We present the first algorithm achieving a constant-factor approximation in fading metrics. The best previous results depend on further network parameters such as the ratio of the maximum and the minimum distance between a sender and its receiver. Expressed only in terms of n, they are (trivial) Ω(n) approximations. Our algorithm still achieves an O(log n) approximation if we only assume to have a general metric space rather than a fading metric. Furthermore, existing approaches work well together with the algorithm allowing it to be used in singlehop and multi-hop scheduling scenarios. Here, we also get polylog n approximations.",
"We study convergence of distributed protocols for power control in a non-cooperative wireless transmission scenario. There are n wireless communication requests or links that experience interference and noise. To be successful a link must satisfy an SINR constraint. Each link is a rational selfish agent that strives to be successful with the least power that is required. A classic approach to this problem is the fixed-point iteration due to Foschini and Miljanic , for which we prove the first bounds on worst-case convergence times - after roughly O(n n) rounds all SINR constraints are nearly satisfied. When agents try to satisfy each constraint exactly, however, links might not be successful at all. For this case, we design a novel framework for power control using regret learning algorithms and iterative discretization. While the exact convergence times must rely on a variety of parameters, we show that roughly a polynomial number of rounds suffices to make every link successful during at least a constant fraction of all previous rounds.",
"The capacity of a wireless network is the maximum possible amount of simultaneous communication, taking interference into account. Formally, we treat the following problem. Given is a set of links, each a sender-receiver pair located in a metric space, and an assignment of power to the senders. We seek a maximum subset of links that are feasible in the SINR model: namely, the signal received on each link should be larger than the sum of the interferences from the other links. We give a constant-factor approximation that holds for any length-monotone, sub-linear power assignment and any distance metric. We use this to give essentially tight characterizations of capacity maximization under power control using oblivious power assignments. Specifically, we show that the mean power assignment is optimal for capacity maximization of bi-directional links, and give a tight @math -approximation of scheduling bi-directional links with power control using oblivious power. For uni-directional links we give a nearly optimal @math -approximation to the power control problem using mean power, where @math is the ratio of longest and shortest links. Combined, these results clarify significantly the centralized complexity of wireless communication problems.",
"",
""
]
}
|
1108.0027
|
1713870197
|
Degree distribution models are incredibly important tools for analyzing and understanding the structure and formation of social networks, and can help guide the design of efficient graph algorithms. In particular, the Power-law degree distribution has long been used to model the structure of online social networks, and is the basis for algorithms and heuristics in graph applications such as influence maximization and social search. Along with recent measurement results, our interest in this topic was sparked by our own experimental results on social graphs that deviated significantly from those predicted by a Power-law model. In this work, we seek a deeper understanding of these deviations, and propose an alternative model with significant implications on graph algorithms and applications. We start by quantifying this artifact using a variety of real social graphs, and show that their structures cannot be accurately modeled using elementary distributions including the Power-law. Instead, we propose the Pareto-Lognormal (PLN) model, verify its goodness-of-fit using graphical and statistical methods, and present an analytical study of its asymptotical differences with the Power-law. To demonstrate the quantitative benefits of the PLN model, we compare the results of three wide-ranging graph applications on real social graphs against those on synthetic graphs generated using the PLN and Power-law models. We show that synthetic graphs generated using PLN are much better predictors of degree distributions in real graphs, and produce experimental results with errors that are orders-of-magnitude smaller than those produced by the Power-law model.
|
Social Networks. Historically, both online and offline social networks have been explained through the seminal Power-law model. The Power-law is often described with the "rich get richer" paradigm, which has been proven to hold in real datasets across multiple disciplines, including Internet router topology graphs, biological graphs @cite_37 @cite_20 , human mobility traces @cite_10 , etc.
|
{
"cite_N": [
"@cite_37",
"@cite_10",
"@cite_20"
],
"mid": [
"2000042664",
"2096509679",
"2112493377"
],
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.",
"We study data transfer opportunities between wireless devices carried by humans. We observe that the distribution of the intercontact time (the time gap separating two contacts between the same pair of devices) may be well approximated by a power law over the range [10 minutes; 1 day]. This observation is confirmed using eight distinct experimental data sets. It is at odds with the exponential decay implied by the most commonly used mobility models. In this paper, we study how this newly uncovered characteristic of human mobility impacts one class of forwarding algorithms previously proposed. We use a simplified model based on the renewal theory to study how the parameters of the distribution impact the performance in terms of the delivery delay of these algorithms. We make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices",
"Global surveys of genomes measure the usage of essential molecular parts, defined here as protein families, superfamilies or folds, in different organisms. Based on surveys of the first 20 completely sequenced genomes, we observe that the occurrence of these parts follows a power-law distribution. That is, the number of distinct parts (F) with a given genomic occurrence (V) decays as F ≈ aV^(-b), with a few parts occurring many times and most occurring infrequently. For a given organism, the distributions of families, superfamilies and folds are nearly identical, and this is reflected in the size of the decay exponent b. Moreover, the exponent varies between different organisms, with those of smaller genomes displaying a steeper decay (i.e. larger b). Clearly, the power law indicates a preference to duplicate genes that encode for molecular parts which are already common. Here, we present a minimal, but biologically meaningful model that accurately describes the observed power law. Although the model performs equally well for all three protein classes, we focus on the occurrence of folds in preference to families and superfamilies. This is because folds are comparatively insensitive to the effects of point mutations that can cause a family member to diverge beyond detectable similarity. In the model, genomes evolve through two basic operations: (i) duplication of existing genes; (ii) net flow of new genes. The flow term is closely related to the exponent b and can accommodate considerable gene loss; however, we demonstrate that the observed data is reproduced best with a net inflow, i.e. with more gene gain than loss. Moreover, we show that prokaryotes have much higher rates of gene acquisition than eukaryotes, probably reflecting lateral transfer. A further natural outcome from our model is an estimation of the fold composition of the initial genome, which potentially relates to the common ancestor for modern organisms.
Supplementary material pertaining to this work is available from www.partslist.org/powerlaw. © 2001 Academic Press"
]
}
|
1108.0027
|
1713870197
|
Degree distribution models are incredibly important tools for analyzing and understanding the structure and formation of social networks, and can help guide the design of efficient graph algorithms. In particular, the Power-law degree distribution has long been used to model the structure of online social networks, and is the basis for algorithms and heuristics in graph applications such as influence maximization and social search. Along with recent measurement results, our interest in this topic was sparked by our own experimental results on social graphs that deviated significantly from those predicted by a Power-law model. In this work, we seek a deeper understanding of these deviations, and propose an alternative model with significant implications on graph algorithms and applications. We start by quantifying this artifact using a variety of real social graphs, and show that their structures cannot be accurately modeled using elementary distributions including the Power-law. Instead, we propose the Pareto-Lognormal (PLN) model, verify its goodness-of-fit using graphical and statistical methods, and present an analytical study of its asymptotical differences with the Power-law. To demonstrate the quantitative benefits of the PLN model, we compare the results of three wide-ranging graph applications on real social graphs against those on synthetic graphs generated using the PLN and Power-law models. We show that synthetic graphs generated using PLN are much better predictors of degree distributions in real graphs, and produce experimental results with errors that are orders-of-magnitude smaller than those produced by the Power-law model.
|
One of the first OSN studies was conducted on the Club Nexus website @cite_8 . Later analytical studies attracted attention for their large scale, including CyWorld, MySpace and Orkut @cite_6 ; YouTube, Flickr and LiveJournal @cite_34 ; and the most recent studies of Facebook @cite_26 and Twitter @cite_36 . More recently, researchers have begun to investigate the temporal properties of OSNs @cite_0 @cite_2 @cite_11 .
|
{
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_36",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_34",
"@cite_11"
],
"mid": [
"2047443612",
"2033314190",
"2101196063",
"2121761994",
"2122710250",
"2131112624",
"2115022330",
"2111708605"
],
"abstract": [
"Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone. This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"We present an analysis of Club Nexus, an online community at Stanford University. Through the Nexus site we were able to study a reflection of the real world community structure within the student body. We observed and measured social network phenomena such as the small world effect, clustering, and the strength of weak ties. Using the rich profile data provided by the users we were able to deduce the attributes contributing to the formation of friendships, and to determine how the similarity of users decays as the distance between them in the network increases. In addition, we found correlations between users' personalities and their other attributes, as well as interesting correspondences between how users perceive themselves and how they are perceived by others.",
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.",
"Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two exponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks.",
"In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"Online social networking sites like MySpace, Orkut, and Flickr are among the most popular sites on the Web and continue to experience dramatic growth in their user population. The popularity of these sites offers a unique opportunity to study the dynamics of social networks at scale. Having a proper understanding of how online social networks grow can provide insights into the network structure, allow predictions of future growth, and enable simulation of systems on networks of arbitrary size. However, to date, most empirical studies have focused on static network snapshots rather than growth dynamics. In this paper, we collect and examine detailed growth data from the Flickr online social network, focusing on the ways in which new links are formed. Our study makes two contributions. First, we collect detailed data covering three months of growth, encompassing 950,143 new users and over 9.7 million new links, and we make this data available to the research community. Second, we use a first-principles approach to investigate the link formation process. In short, we find that links tend to be created by users who already have many links, that users tend to respond to incoming links by creating links back to the source, and that users link to other users who are already close in the network.",
"Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.",
"How do real graphs evolve over time? What are \"normal\" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a \"forest fire\" spreading process, that has a simple, intuitive justification, requires very few parameters (like the \"flammability\" of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study."
]
}
|
1108.0027
|
1713870197
|
Degree distribution models are incredibly important tools for analyzing and understanding the structure and formation of social networks, and can help guide the design of efficient graph algorithms. In particular, the Power-law degree distribution has long been used to model the structure of online social networks, and is the basis for algorithms and heuristics in graph applications such as influence maximization and social search. Along with recent measurement results, our interest in this topic was sparked by our own experimental results on social graphs that deviated significantly from those predicted by a Power-law model. In this work, we seek a deeper understanding of these deviations, and propose an alternative model with significant implications on graph algorithms and applications. We start by quantifying this artifact using a variety of real social graphs, and show that their structures cannot be accurately modeled using elementary distributions including the Power-law. Instead, we propose the Pareto-Lognormal (PLN) model, verify its goodness-of-fit using graphical and statistical methods, and present an analytical study of its asymptotical differences with the Power-law. To demonstrate the quantitative benefits of the PLN model, we compare the results of three wide-ranging graph applications on real social graphs against those on synthetic graphs generated using the PLN and Power-law models. We show that synthetic graphs generated using PLN are much better predictors of degree distributions in real graphs, and produce experimental results with errors that are orders-of-magnitude smaller than those produced by the Power-law model.
|
Preliminary analysis of OSN structures in these and other studies has shown that the degree distribution does not follow a pure Power-law distribution. As a result, follow-up work proposed to segment these distributions and fit the segmented pieces with distinct Power-law settings @cite_15 @cite_6 .
|
{
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2137135938",
"2121761994"
],
"abstract": [
"With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.",
"Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two exponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks."
]
}
|
1108.0027
|
1713870197
|
Degree distribution models are incredibly important tools for analyzing and understanding the structure and formation of social networks, and can help guide the design of efficient graph algorithms. In particular, the Power-law degree distribution has long been used to model the structure of online social networks, and is the basis for algorithms and heuristics in graph applications such as influence maximization and social search. Along with recent measurement results, our interest in this topic was sparked by our own experimental results on social graphs that deviated significantly from those predicted by a Power-law model. In this work, we seek a deeper understanding of these deviations, and propose an alternative model with significant implications on graph algorithms and applications. We start by quantifying this artifact using a variety of real social graphs, and show that their structures cannot be accurately modeled using elementary distributions including the Power-law. Instead, we propose the Pareto-Lognormal (PLN) model, verify its goodness-of-fit using graphical and statistical methods, and present an analytical study of its asymptotical differences with the Power-law. To demonstrate the quantitative benefits of the PLN model, we compare the results of three wide-ranging graph applications on real social graphs against those on synthetic graphs generated using the PLN and Power-law models. We show that synthetic graphs generated using PLN are much better predictors of degree distributions in real graphs, and produce experimental results with errors that are orders-of-magnitude smaller than those produced by the Power-law model.
|
Social Applications and Systems. We have shown that our proposed PLN is statistically more accurate in describing OSNs than the seminal Power-law model. We believe that many social applications and protocols designed based on the Power-law assumption need to be re-evaluated, especially algorithms and protocols that rely on the population of high-degree nodes or their connectivity. Examples include distributed resource replication strategies to minimize routing delay and social search, epidemic dissemination strategies to maximize information spread @cite_25 , landmark selection strategies to accurately predict shortest paths in graphs @cite_42 , community detection to improve social recommendation systems, and social attack strategies @cite_5 .
|
{
"cite_N": [
"@cite_5",
"@cite_42",
"@cite_25"
],
"mid": [
"2063742835",
"2172107427",
"2108858998"
],
"abstract": [
"We consider a privacy threat to a social network in which the goal of an attacker is to obtain knowledge of a significant fraction of the links in the network. We formalize the typical social network interface and the information about links that it provides to its users in terms of lookahead. We consider a particular threat where an attacker subverts user accounts to get information about local neighborhoods in the network and pieces them together in order to get a global picture. We analyze, both experimentally and theoretically, the number of user accounts an attacker would need to subvert for a successful attack, as a function of his strategy for choosing users whose accounts to subvert and a function of lookahead provided by the network. We conclude that such an attack is feasible in practice, and thus any social network that wishes to protect the link privacy of its users should take great care in choosing the lookahead of its interface, limiting it to 1 or 2, whenever possible.",
"In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.",
"Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study the efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improves influence spread. We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org. Our experimental results show that (a) our improved greedy algorithm achieves better running time comparing with the improvement of [7] with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, it achieves almost matching influence thread with the greedy algorithm, and more importantly (c) the degree discount heuristics run only in milliseconds while even the improved greedy algorithms run in hours in our experiment graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfying influence spread and blazingly fast running time. Therefore, contrary to what implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new lights on the research of heuristic algorithms."
]
}
|
1108.0129
|
1992454251
|
Mutation rate variation across loci is well known to cause difficulties, notably identifiability issues, in the reconstruction of evolutionary trees from molecular sequences. Here we introduce a new approach for estimating general rates-across-sites models. Our results imply, in particular, that large phylogenies are typically identifiable under rate variation. We also derive sequence-length requirements for high-probability reconstruction. Our main contribution is a novel algorithm that clusters sites according to their mutation rate. Following this site clustering step, standard reconstruction techniques can be used to recover the phylogeny. Our results rely on a basic insight: that, for large trees, certain site statistics experience concentration-of-measure phenomena.
|
Most prior theoretical work on mixture models has focused on the question of identifiability . A class of phylogenetic models is identifiable if any two models in the class produce different data distributions. It is well-known that unmixed phylogenetic models are typically identifiable @cite_17 . This is not the case in general for mixtures of phylogenies. For instance, @cite_2 showed that for any two trees one can find a random scaling on each of them such that their data distributions are identical. Hence it is hopeless in general to reconstruct phylogenies under mixture models. See also @cite_8 @cite_0 @cite_26 @cite_25 @cite_3 @cite_5 for further examples of this type.
|
{
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"2137668707",
"2104545328",
"",
"2131153280",
"2024076752",
"2963984870",
"",
"2057853309"
],
"abstract": [
"In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how “common” non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning “mixed branch repulsion” on trees larger than quartet trees under the CFN model.",
"The rates-across-sites assumption in phylogenetic inference posits that the rate matrix governing the Markovian evolution of a character on an edge of the putative phylogenetic tree is the product of a character-specific scale factor and a rate matrix that is particular to that edge. Thus, evolution follows basically the same process for all characters, except that it occurs faster for some characters than others. To allow estimation of tree topologies and edge lengths for such models, it is commonly assumed that the scale factors are not arbitrary unknown constants, but rather unobserved, independent, identically distributed draws from a member of some parametric family of distributions. A popular choice is the gamma family. We consider an example of a clock-like tree with three taxa, one unknown edge length, a known root state, and a parametric family of scale factor distributions that contains the gamma family. This model has the property that, for a generic choice of unknown edge length and scale factor distribution, there is another edge length and scale factor distribution which generates data with exactly the same distribution, so that even with infinitely many data it will be typically impossible to make correct inferences about the unknown edge length.",
"",
"Phylogenetic mixtures model the inhomogeneous molecular evolution commonly observed in data. The perfor- mance of phylogenetic reconstruction methods where the underlying data are generated by a mixture model has stimulated considerable recent debate. Much of the controversy stems from simulations of mixture model data on a given tree topology for which reconstruction algorithms output a tree of a different topology; these findings were held up to show the short- comings of particular tree reconstruction methods. In so doing, the underlying assumption was that mixture model data on one topology can be distinguished from data evolved on an unmixed tree of another topology given enough data and the \"correct\" method. Here we show that this assumption can be false. For biologists, our results imply that, for example, the combined data from two genes whose phylogenetic trees differ only in terms of branch lengths can perfectly fit a tree of a different topology. (Mixture model; model identifiability; phylogenetics; sequence evolution.)",
"ABSTRACT For a sequence of colors independently evolving on a tree under a simple Markov model, we consider conditions under which the tree can be uniquely recovered from the “sequence spectrum”—the expected frequencies of the various leaf colorations. This is relevant for phylogenetic analysis (where colors represent nucleotides or amino acids; leaves represent extant taxa) as the sequence spectrum is estimated directly from a collection of aligned sequences. Allowing the rate of the evolutionary process to vary across sites is an important extension over most previous studies—we show that, given suitable restrictions on the rate distribution, the true tree (up to the placement of its root) is uniquely identified by its sequence spectrum. However, if the rate distribution is unknown and arbitrary, then, for simple models, it is possible for every tree to produce the same sequence spectrum. Hence there is a logical barrier to accurate, consistent phylogenetic inference for these models when assumptions ab...",
"Distance-based approaches in phylogenetics such as Neighbor-Joining are a fast and popular approach for building trees. These methods take pairs of sequences, and from them construct a value that, in expectation, is additive under a stochastic model of site substitution. Most models assume a distribution of rates across sites, often based on a gamma distribution. Provided the (shape) parameter of this distribution is known, the method can correctly reconstruct the tree. However, if the shape parameter is not known then we show that topologically different trees, with different shape parameters and associated positive branch lengths, can lead to exactly matching distributions on pairwise site patterns between all pairs of taxa. Thus, one could not distinguish between the two trees using pairs of sequences without some prior knowledge of the shape parameter. More surprisingly, this can happen for any choice of distinct shape parameters on the two trees, and thus the result is not peculiar to a particular or contrived selection of the shape parameters. On a positive note, we point out known conditions where identifiability can be restored (namely, when the branch lengths are clocklike, or if methods such as maximum likelihood are used).",
"",
"A Markov model of evolution of characters on a phylogenetic tree consists of a tree topology together with a specification of probability transition matrices on the edges of the tree. Previous work has shown that, under mild conditions, the tree topology may be reconstructed, in the sense that the topology is identifiable from knowledge of the joint distribution of character states at pairs of terminal nodes of the tree. Also, the method of maximum likelihood is statistically consistent for inferring the tree topology. In this article we answer the analogous questions for reconstructing the full model, including the edge transition matrices. Under mild conditions, such full reconstruction is achievable, not by using pairs of terminal nodes, but rather by using triples of terminal nodes. The identifiability result generalizes previous results that were restricted either to characters having two states or to transition matrices having special structure. The proof develops matrix relationships that may be exploited to identify the model. We also use the identifiability result to prove that the method of maximum likelihood is consistent for reconstructing the full model."
]
}
|
1108.0129
|
1992454251
|
Mutation rate variation across loci is well known to cause difficulties, notably identifiability issues, in the reconstruction of evolutionary trees from molecular sequences. Here we introduce a new approach for estimating general rates-across-sites models. Our results imply, in particular, that large phylogenies are typically identifiable under rate variation. We also derive sequence-length requirements for high-probability reconstruction. Our main contribution is a novel algorithm that clusters sites according to their mutation rate. Following this site clustering step, standard reconstruction techniques can be used to recover the phylogeny. Our results rely on a basic insight: that, for large trees, certain site statistics experience concentration-of-measure phenomena.
|
Beyond the identifiability question, there seems to have been little rigorous work on reconstructing phylogenetic mixture models. One positive result is the case of the molecular clock assumption with across-sites rate variation @cite_2 , although no sequence-length requirements are provided. There is a large body of work on practical reconstruction algorithms for various types of mixtures, notably rates-across-sites models and covarion-type models, using mostly likelihood and Bayesian methods. See e.g. @cite_14 for references. But the optimization problems they attempt to solve are likely NP-hard @cite_1 @cite_13 . There also exist many techniques for testing for the presence of a mixture (for example, for testing for rate heterogeneity), but such tests typically require knowledge of the phylogeny. See e.g. @cite_16 .
|
{
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_2",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2136627618",
"2024076752",
"2068117230",
"2108911240"
],
"abstract": [
"",
"Maximum likelihood (ML) is an increasingly popular optimality criterion for selecting evolutionary trees [Felsenstein 1981]. Finding optimal ML trees appears to be a very hard computational task, but for tractable cases, ML is the method of choice. In particular, algorithms and heuristics for ML take longer to run than algorithms and heuristics for the second major character based criterion, maximum parsimony (MP). However, while MP has been known to be NP-complete for over 20 years [Foulds and Graham, 1982; 1986], such a hardness result for ML has so far eluded researchers in the field. An important work by Tuffley and Steel [1997] proves quantitative relations between the parsimony values of given sequences and the corresponding log likelihood values. However, a direct application of their work would only give an exponential time reduction from MP to ML. Another step in this direction has recently been made by Addario-Berry et al. [2004], who proved that ancestral maximum likelihood (AML) is NP-complete. AML “lies in between” the two problems, having some properties of MP and some properties of ML. Still, the AML proof is not directly applicable to the ML problem. We resolve the question, showing that “regular” ML on phylogenetic trees is indeed intractable. Our reduction follows the vertex cover reductions for MP [ 1986] and AML [Addario-Berry et al. 2004], but its starting point is an approximation version of vertex cover, known as gap vc. The crux of our work is not the reduction, but its correctness proof. The proof goes through a series of tree modifications, while controlling the likelihood losses at each step, using the bounds of Tuffley and Steel [1997]. The proof can be viewed as correlating the value of any ML solution to an arbitrarily close approximation to vertex cover.",
"ABSTRACT For a sequence of colors independently evolving on a tree under a simple Markov model, we consider conditions under which the tree can be uniquely recovered from the “sequence spectrum”—the expected frequencies of the various leaf colorations. This is relevant for phylogenetic analysis (where colors represent nucleotides or amino acids; leaves represent extant taxa) as the sequence spectrum is estimated directly from a collection of aligned sequences. Allowing the rate of the evolutionary process to vary across sites is an important extension over most previous studies—we show that, given suitable restrictions on the rate distribution, the true tree (up to the placement of its root) is uniquely identified by its sequence spectrum. However, if the rate distribution is unknown and arbitrary, then, for simple models, it is possible for every tree to produce the same sequence spectrum. Hence there is a logical barrier to accurate, consistent phylogenetic inference for these models when assumptions ab...",
"The use of molecular phylogenies to examine evolutionary questions has become commonplace with the automation of DNA sequencing and the availability of efficient computer programs to perform phylogenetic analyses. The application of computer simulation and likelihood ratio tests to evolutionary hypotheses represents a recent methodological development in this field. Likelihood ratio tests have enabled biologists to address many questions in evolutionary biology that have been difficult to resolve in the past, such as whether host-parasite systems are cospeciating and whether models of DNA substitution adequately explain observed sequences.",
"Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel."
]
}
|
1108.0129
|
1992454251
|
Mutation rate variation across loci is well known to cause difficulties, notably identifiability issues, in the reconstruction of evolutionary trees from molecular sequences. Here we introduce a new approach for estimating general rates-across-sites models. Our results imply, in particular, that large phylogenies are typically identifiable under rate variation. We also derive sequence-length requirements for high-probability reconstruction. Our main contribution is a novel algorithm that clusters sites according to their mutation rate. Following this site clustering step, standard reconstruction techniques can be used to recover the phylogeny. Our results rely on a basic insight: that, for large trees, certain site statistics experience concentration-of-measure phenomena.
|
The proof of our main results relies on the construction of a site clustering statistic that discriminates between different rates. A similar statistic was also used in @cite_4 in a different context. However, in contrast to @cite_4 , our main reconstruction result requires that a site clustering statistic be constructed based only on data generated by the mixture---that is, without prior knowledge of the model.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2038969923"
],
"abstract": [
"A widely studied model for generating sequences is to \"evolve\" them on a tree according to a symmetric Markov process. We prove that model trees tend to be \"maximally far apart\" in terms of variational distance."
]
}
|
1108.0477
|
2949789221
|
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex. We study the popular reconstruction method of @math -regularized least squares or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to the complex signals and measurements and obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework recently introduced for the analysis of AMP, to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP.
|
The Bayesian approach that assumes a hidden Markov model for the signal has also been explored for the recovery of group sparse signals @cite_15 @cite_1 . It has been shown that AMP combined with an expectation maximization algorithm (for estimating the parameters of the distribution) leads to promising results in practice @cite_48 . @cite_70 have taken the first step towards a theoretical understanding of such algorithms. However, a complete understanding of the expectation maximization employed in such methods is not available yet. Furthermore, the success of such algorithms seems to depend on the match between the assumed and actual prior distribution. Such dependencies have not been theoretically analyzed yet. In this paper we assume that the distribution of non-zero coefficients is not known beforehand and characterize the performance of c-LASSO for the least favorable distribution.
|
{
"cite_N": [
"@cite_70",
"@cite_48",
"@cite_15",
"@cite_1"
],
"mid": [
"1987772002",
"2026933032",
"2132381218",
""
],
"abstract": [
"We consider the estimation of an independent and identically distributed (i.i.d.) (possibly non-Gaussian) vector x ∈ Rn from measurements y ∈ Rm obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. A novel method, called adaptive generalized approximate message passing (adaptive GAMP) is presented. It enables the joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. We prove that, for large i.i.d. Gaussian transform matrices, the asymptotic componentwise behavior of the adaptive GAMP is predicted by a simple set of scalar state evolution equations. In addition, we show that the adaptive GAMP yields asymptotically consistent parameter estimates, when a certain maximum-likelihood estimation can be performed in each step. This implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. Remarkably, this result applies to essentially arbitrary parametrizations of the unknown distributions, including nonlinear and non-Gaussian ones. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of linear-nonlinear models with provable guarantees.",
"When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution was a priori known, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, however, the distribution is unknown, motivating the use of robust algorithms like LASSO, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the non-zero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments on a wide range of signal classes confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, in the high-dimensional regime, for most (but not all) sensing operators.",
"We propose a factor-graph-based approach to joint channel-estimation-and-decoding (JCED) of bit-interleaved coded orthogonal frequency division multiplexing (BICM-OFDM). In contrast to existing designs, ours is capable of exploiting not only sparsity in sampled channel taps but also clustering among the large taps, behaviors which are known to manifest at larger communication bandwidths. In order to exploit these channel-tap structures, we adopt a two-state Gaussian mixture prior in conjunction with a Markov model on the hidden state. For loopy belief propagation, we exploit a “generalized approximate message passing” (GAMP) algorithm recently developed in the context of compressed sensing, and show that it can be successfully coupled with soft-input soft-output decoding, as well as hidden Markov inference, through the standard sum-product framework. For N subcarriers and any channel length L<;N, the resulting JCED-GAMP scheme has a computational complexity of only O(N log2 N +N|S|), where |S| is the constellation size. Numerical experiments using IEEE 802.15.4a channels show that our scheme yields BER performance within 1 dB of the known-channel bound and 3-4 dB better than soft equalization based on LMMSE and LASSO.",
""
]
}
|
1108.0477
|
2949789221
|
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex. We study the popular reconstruction method of @math -regularized least squares or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to the complex signals and measurements and obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework recently introduced for the analysis of AMP, to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP.
|
While writing this paper we were made aware that in an independent work Donoho, Johnstone, and Montanari are extending the SE framework to the general setting of group sparsity @cite_77 . Their work considers the state evolution framework for the group-LASSO problem and will include the generalization of the analysis provided in this paper to the case where the variables tend to cluster in groups of size @math .
|
{
"cite_N": [
"@cite_77"
],
"mid": [
"2103539935"
],
"abstract": [
"Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables e = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here, A is an n×N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve δ = δ(e) separating successful from unsuccessful reconstruction of x0 by AMP is given by δ = M(e|Denoiser), where M(e|Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem. We prove that this formula follows from state evolution and present numerical results validating it in a wide range of settings. The above formula generates numerous new insights, both in the scalar and in the nonscalar cases."
]
}
|
1108.0477
|
2949789221
|
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex. We study the popular reconstruction method of @math -regularized least squares or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to the complex signals and measurements and obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework recently introduced for the analysis of AMP, to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP.
|
Both complex signals and group-sparse signals are special cases of model-based CS @cite_21 . By introducing more structured models for the signal, @cite_21 proves that the number of measurements needed is proportional to the "complexity" of the model rather than the sparsity level @cite_23 . The results in model-based CS also suffer from loose constants in both the number of measurements and the mean square error bounds.
|
{
"cite_N": [
"@cite_21",
"@cite_23"
],
"mid": [
"2125680629",
"2023118766"
],
"abstract": [
"Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ≪ N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models, wavelet trees and block sparsity, into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.",
"The fast growing field of compressed sensing is founded on the fact that if a signal is 'simple' and has some 'structure', then it can be reconstructed accurately with far fewer samples than its ambient dimension. Many different plausible structures have been explored in this field, ranging from sparsity to low-rankness and to finite rate of innovation. However, there are important abstract questions that are yet to be answered. For instance, what are the general abstract meanings of 'structure' and 'simplicity'? Do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we aim to address these two questions. Using algorithmic information theory tools such as Kolmogorov complexity, we provide a unified method of describing 'simplicity' and 'structure'. We then explore the performance of an algorithm motivated by Occam's Razor (called MCP for minimum complexity pursuit) and show that it requires @math number of samples to recover a signal, where @math and @math represent its complexity and ambient dimension, respectively. Finally, we discuss more general classes of signals and provide guarantees on the performance of MCP."
]
}
|
1108.0072
|
2953312955
|
We study the scaling properties of a georouting scheme in a wireless multi-hop network of @math mobile nodes. Our aim is to increase the network capacity quasi linearly with @math while keeping the average delay bounded. In our model, mobile nodes move according to an i.i.d. random walk with velocity @math and transmit packets to randomly chosen destinations. The average packet delivery delay of our scheme is of order @math and it achieves the network capacity of order @math . This shows a practical throughput-delay trade-off, in particular when compared with the seminal result of Gupta and Kumar which shows network capacity of order @math and negligible delay and the groundbreaking result of Grossglauser and Tse which achieves network capacity of order @math but with an average delay of order @math . We confirm the generality of our analytical results using simulations under various interference models.
|
In the context of @cite_15 , the number of relays a packet has to traverse to reach its destination is @math . Consequently, @math must be divided by @math to get the useful capacity: @math . In order to ensure connectivity in the network, so that every source is able to communicate with its randomly chosen destination, @math must satisfy the limit @math . This leads to Gupta and Kumar's maximum capacity of @math with "hot potatoes" routing.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2137775453"
],
"abstract": [
"When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance."
]
}
|
1108.0072
|
2953312955
|
We study the scaling properties of a georouting scheme in a wireless multi-hop network of @math mobile nodes. Our aim is to increase the network capacity quasi linearly with @math while keeping the average delay bounded. In our model, mobile nodes move according to an i.i.d. random walk with velocity @math and transmit packets to randomly chosen destinations. The average packet delivery delay of our scheme is of order @math and it achieves the network capacity of order @math . This shows a practical throughput-delay trade-off, in particular when compared with the seminal result of Gupta and Kumar which shows network capacity of order @math and negligible delay and the groundbreaking result of Grossglauser and Tse which achieves network capacity of order @math but with an average delay of order @math . We confirm the generality of our analytical results using simulations under various interference models.
|
On the practical side, many protocols have been proposed for wireless multi-hop networks. These protocols may be classified into topology-based and position-based protocols. Topology-based protocols @cite_6 @cite_8 @cite_18 need to maintain information on routes potentially or currently in use, so they do not work effectively in environments with a high frequency of topology changes. For this reason, there has been an increasing interest in position-based routing protocols. In these protocols, a node needs to know its own position, the one-hop neighbors' positions, and the destination node's position. These protocols do not need control packets to maintain link states or to update routing tables. Examples of such protocols can be found in @cite_10 @cite_1 @cite_16 @cite_9 @cite_12 @cite_14 @cite_2 @cite_4 . In contrast to our work, they do not analyze the trade-off between the capacity and the delay of the network under these protocols and their scaling properties.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"2102258543",
"2131205845",
"2151112946",
"1495606459",
"",
"",
"1549535141",
"",
"2119710605",
"1969043696",
"2148565706"
],
"abstract": [
"The Ad hoc On-Demand Distance Vector (AODV) routing protocol is intended for use by mobile nodes in an ad hoc network. It offers quick adaptation to dynamic link conditions, low processing and memory overhead, low network utilization, and determines unicast routes to destinations within the ad hoc network. It uses destination sequence numbers to ensure loop freedom at all times (even in the face of anomalous delivery of routing control messages), avoiding problems (such as \"counting to infinity\") associated with classical distance vector protocols.",
"This paper presents a model for analyzing the performance of transmission strategies in a multihop packet radio network where each station has adjustable transmission radius. A larger transmission radius will increase the probability of finding a receiver in the desired direction and contribute bigger progress if the transmission is successful, but it also has a higher probability of collision with other transmissions. The converse is true for shorter transmission range. We illustrate our model by comparing three transmission strategies. Our results show that the network can achieve better performance by suitably controlling the transmission range. One of the transmission strategies, namely transmitting to the nearest forward neighbor by using adjustable transmission power, has desirable features in a high terminal density environment.",
"In this paper we determine throughput equations for a packet radio network where terminals are randomly distributed on the plane, are able to capture transmitted signals, and use slotted ALOHA to access the channel. We find that the throughput of the network is a strictly increasing function of the receiver's ability to capture signals, and depends on the transmission range of the terminals and their probability of transmitting packets. Under ideal circumstances, we show the expected fraction of terminals in the network that are engaged in successful traffic in any slot does not exceed 21 percent.",
"",
"",
"",
"This document describes the Optimized Link State Routing (OLSR) protocol for mobile ad hoc networks. The protocol is an optimization of the classical link state algorithm tailored to the requirements of a mobile wireless LAN. The key concept used in the protocol is that of multipoint relays (MPRs). MPRs are selected nodes which forward broadcast messages during the flooding process. This technique substantially reduces the message overhead as compared to a classical flooding mechanism, where every node retransmits each message when it receives the first copy of the message. In OLSR, link state information is generated only by nodes elected as MPRs. Thus, a second optimization is achieved by minimizing the number of control messages flooded in the network. As a third optimization, an MPR node may choose to report only links between itself and its MPR selectors. Hence, contrary to the classic link state algorithm, partial link state information is distributed in the network. This information is then used for route calculation. OLSR provides optimal routes (in terms of number of hops). The protocol is particularly suitable for large and dense networks as the technique of MPRs works well in this context.",
"",
"A mobile ad hoc network consists of wireless hosts that may move often. Movement of hosts results in a change in routes, requiring some mechanism for determining new routes. Several routing protocols have already been proposed for ad hoc networks. This report suggests an approach to utilize location information (for instance, obtained using the global positioning system) to improve performance of routing protocols for ad hoc networks.",
"",
"In multihop packet radio networks with randomly distributed terminals, the optimal transmission radii to maximize the expected progress of packets in desired directions are determined with a variety of transmission protocols and network configurations. It is shown that the FM capture phenomenon with slotted ALOHA greatly improves the expected progress over the system without capture due to the more limited area of possibly interfering terminals around the receiver. The (mini)slotted nonpersistent carrier-sense-multiple-access (CSMA) only slightly outperforms ALOHA, unlike the single-hop case (where a large improvement is available), because of a large area of \"hidden\" terminals and the long vulnerable period generated by them. As an example of an inhomogeneous terminal distribution, the effect of a gap in an otherwise randomly distributed terminal population on the expected progress of packets crossing the gap is considered. In this case, the disadvantage of using a large transmission radius is demonstrated."
]
}
|
1107.5543
|
2952543321
|
As individuals communicate, their exchanges form a dynamic network. We demonstrate, using time series analysis of communication in three online settings, that network structure alone can be highly revealing of the diversity and novelty of the information being communicated. Our approach uses both standard and novel network metrics to characterize how unexpected a network configuration is, and to capture a network's ability to conduct information. We find that networks with a higher conductance in link structure exhibit higher information entropy, while unexpected network configurations can be tied to information novelty. We use a simulation model to explain the observed correspondence between the evolution of a network's structure and the information it carries.
|
The dynamic nature of web content has been of interest because of its implications for search and retrieval @cite_21 @cite_12 . In particular, changes in content and link structure can be used to find trending content. @cite_10 used bursts in appearance of connections between entities in text to detect events. Although they related network structure to properties of content, the network was generated from entities within the content itself. @cite_18 analyzed a joint model of network and topic evolution to track and predict topic popularity, but did not explicitly examine network structure. Two other studies used aggregate volume of interactions in social media to infer the evolution of content, but did not explicitly examine the networks' structure. @cite_13 used the volume of interactions between nodes in social media, along with other variables, to identify attention gathering items early on in their lifecycle. In a non-Web context, @cite_25 showed that the communication volume among stock traders correlated with synchrony in their trading behavior and correspondingly with profits.
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_21",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2056797132",
"2041651443",
"2157748587",
"",
"",
"2148931201"
],
"abstract": [
"User generated information in online communities has been characterized with the mixture of a text stream and a network structure both changing over time. A good example is a web-blogging community with the daily blog posts and a social network of bloggers. An important task of analyzing an online community is to observe and track the popular events, or topics that evolve over time in the community. Existing approaches usually focus on either the burstiness of topics or the evolution of networks, but ignoring the interplay between textual topics and network structures. In this paper, we formally define the problem of popular event tracking in online communities (PET), focusing on the interplay between texts and networks. We propose a novel statistical method that models the popularity of events over time, taking into consideration the burstiness of user interest, information diffusion on the network structure, and the evolution of textual topics. Specifically, a Gibbs Random Field is defined to model the influence of historic status and the dependency relationships in the graph; thereafter a topic model generates the words in text content of the event, regularized by the Gibbs Random Field. We prove that two classic models in information diffusion and text burstiness are special cases of our model under certain situations. Empirical experiments with two different communities and datasets (i.e., Twitter and DBLP) show that our approach is effective and outperforms existing approaches.",
"This paper studies the problem of dynamic relationship and event discovery. A large body of previous work on relation extraction focuses on discovering predefined and static relationships between entities. In contrast, we aim to identify temporally defined (e.g., co-bursting) relationships that are not predefined by an existing schema, and we identify the underlying time constrained events that lead to these relationships. The key challenges in identifying such events include discovering and verifying dynamic connections among entities, and consolidating binary dynamic connections into events consisting of a set of entities that are connected at a given time period. We formalize this problem and introduce an efficient end-to-end pipeline as a solution. In particular, we introduce two formal notions, global temporal constraint cluster and local temporal constraint cluster, for detecting dynamic events. We further design efficient algorithms for discovering such events from a large graph of dynamic relationships. Finally, detailed experiments on real data show the effectiveness of our proposed solution.",
"We seek to gain improved insight into how Web search engines should cope with the evolving Web, in an attempt to provide users with the most up-to-date results possible. For this purpose we collected weekly snapshots of some 150 Web sites over the course of one year, and measured the evolution of content and link structure. Our measurements focus on aspects of potential interest to search engine designers: the evolution of link structure over time, the rate of creation of new pages and new distinct content on the Web, and the rate of change of the content of existing pages under search-centric measures of degree of change. Our findings indicate a rapid turnover rate of Web pages, i.e., high rates of birth and death, coupled with an even higher rate of turnover in the hyperlinks that connect them. For pages that persist over time we found that, perhaps surprisingly, the degree of content shift as measured using TF.IDF cosine distance does not appear to be consistently correlated with the frequency of content updating. Despite this apparent non-correlation, the rate of content shift of a given page is likely to remain consistent over time. That is, pages that change a great deal in one week will likely change by a similarly large degree in the following week. Conversely, pages that experience little change will continue to experience little change. We conclude the paper with a discussion of the potential implications of our results for the design of effective Web search engines.",
"",
"",
"The Web is a dynamic, ever changing collection of information. This paper explores changes in Web content by analyzing a crawl of 55,000 Web pages, selected to represent different user visitation patterns. Although change over long intervals has been explored on random (and potentially unvisited) samples of Web pages, little is known about the nature of finer grained changes to pages that are actively consumed by users, such as those in our sample. We describe algorithms, analyses, and models for characterizing changes in Web content, focusing on both time (by using hourly and sub-hourly crawls) and structure (by looking at page-, DOM-, and term-level changes). Change rates are higher in our behavior-based sample than found in previous work on randomly sampled pages, with a large portion of pages changing more than hourly. Detailed content and structure analyses identify stable and dynamic content within each page. The understanding of Web change we develop in this paper has implications for tools designed to help people interact with dynamic Web content, such as search engines, advertising, and Web browsers."
]
}
|
1107.5543
|
2952543321
|
As individuals communicate, their exchanges form a dynamic network. We demonstrate, using time series analysis of communication in three online settings, that network structure alone can be highly revealing of the diversity and novelty of the information being communicated. Our approach uses both standard and novel network metrics to characterize how unexpected a network configuration is, and to capture a network's ability to conduct information. We find that networks with a higher conductance in link structure exhibit higher information entropy, while unexpected network configurations can be tied to information novelty. We use a simulation model to explain the observed correspondence between the evolution of a network's structure and the information it carries.
|
Prior work that has examined time-evolving network structure explicitly has shown that changes in a network's structure can be reflective of events and trends. In the Graphscope project @cite_8 , changes in the community structure of email communication networks were tied to events within a company. @cite_4 showed that breaks in Supreme Court citation patterns corresponded to changes in the court's ideology. @cite_14 correlated time series of networks of traders in commodity futures contracts with financial variables related to the trades, such as returns, volatility, and duration. However, the notion of information entering into the market was implicit, in the sense that the prices and quantities of contracts traded reflected the information the traders held about the future value of the contract. In contrast, in this paper we explicitly analyze the information that is being directly communicated with each activation of an edge.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8"
],
"mid": [
"2110355504",
"1996739860",
"2155640700"
],
"abstract": [
"We use network analysis to quantify the flow of information through financial markets. Using unique ultra high frequency data, we compute network and financial variables for transactions that occurred during August 2008 in the nearby E-mini S&P 500 futures contract, the cornerstone of price discovery for the S&P 500 Index. We find that network variables presage the information represented by financial variables. Most notably, we find that network variables strongly Granger-cause intertrade duration and trading volume, suggesting that network metrics serve as primitive measures of information flow. Finally, we find that the dynamics of returns and volatility are rooted in the network mechanics of the information arrival process, as evidenced both in our data and the results of an agent-based simulation model.",
"In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.",
"How can we find communities in dynamic networks of social interactions, such as who calls whom, who emails whom, or who sells to whom? How can we spot discontinuity time-points in such streams of graphs, in an on-line, any-time fashion? We propose GraphScope, which addresses both problems using information theoretic principles. Contrary to the majority of earlier methods, it needs no user-defined parameters. Moreover, it is designed to operate on large graphs, in a streaming fashion. We demonstrate the efficiency and effectiveness of GraphScope on real datasets from several diverse domains. In all cases it produces meaningful time-evolving patterns that agree with human intuition."
]
}
|
1107.5924
|
1511590296
|
In this paper a novel computational technique for finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized to control the imposed level of approximation, such that with increasing parameter value the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study. This is a full version of the paper published in the proceedings of CompMod 2011.
|
Discrete approximation methods are commonly used in continuous and hybrid systems analysis (see @cite_19 for an overview regarding reachability) to handle the uncountability of the state space. Direct methods work on the original system and rely on a successor operation that iteratively computes the reachable set, whereas indirect methods abstract from the continuous model by a finite structure for which the analysis is simpler. Our method belongs to the latter class, since it uses numerical simulations and creates the abstraction automaton. Considering a fixed set of initial conditions, generating the states of the automaton incurs a certain overhead compared with simple numerical simulations. However, the advantage of constructing the automaton is that it provides a global view of the dynamics. Moreover, in addition to the rectangular abstraction, the automaton is augmented with weighted transitions that carry quantitative information describing the volumes of the subsets of initial conditions belonging to the attraction basins of different parts of the phase space.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2110910849"
],
"abstract": [
"Set-based reachability analysis computes all possible states a system may attain, and in this sense provides knowledge about the system with a completeness, or coverage, that a finite number of simulation runs can not deliver. Due to its inherent complexity, the application of reachability analysis has been limited so far to simple systems, both in the continuous and the hybrid domain. In this paper we present recent advances that, in combination, significantly improve this applicability, and allow us to find better balance between computational cost and accuracy. The presentation covers, in a unified manner, a variety of methods handling increasingly complex types of continuous dynamics (constant derivative, linear, nonlinear). The improvements include new geometrical objects for representing sets, new approximation schemes, and more flexible combinations of graph-search algorithm and partition refinement. We report briefly some preliminary experiments that have enabled the analysis of systems previously beyond reach."
]
}
|
1107.5924
|
1511590296
|
In this paper a novel computational technique for finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized to control the imposed level of approximation, such that with increasing parameter value the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study. This is a full version of the paper published in the proceedings of CompMod 2011.
|
An indirect method based on a rectangular abstraction automaton that forms a finite quotient of the continuous state space has been employed, e.g., in @cite_24 @cite_29 @cite_18 . In general, these methods rely on the results of @cite_5 @cite_22 and are applicable to (piece-wise) affine or (piece-wise) multi-affine systems. Although not addressed formally in this paper, our technique can be considered a refinement of @cite_24 . However, we focus on obtaining satisfactory approximate results that reduce the extent of spurious behaviour arising from the conservativeness of rectangular abstraction. Our technique can be employed to recognize spurious behaviour of the rectangular abstraction transition system.
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_24",
"@cite_5"
],
"mid": [
"2049491928",
"2147970103",
"2171042116",
"",
"2114578185"
],
"abstract": [
"We use hybrid-systems techniques for the analysis of reachability properties of a class of piecewise-affine (PA) differential equations that are particularly suitable for the modeling of genetic regulatory networks. More specifically, we introduce a hyperrectangular partition of the state space that forms the basis for a discrete abstraction preserving the sign of the derivatives of the state variables. The resulting discrete transition system provides a qualitative description of the network dynamics that is well-adapted to available experimental data and that can be efficiently computed in a symbolic manner from inequality constraints on the parameters.",
"Given an affine system on a full-dimensional polytope, the problem of reaching a particular facet of the polytope, using continuous piecewise-affine state feedback is studied. Necessary conditions and sufficient conditions for the existence of a solution are derived in terms of linear inequalities on the input vectors at the vertices of the polytope. Special attention is paid to affine systems on full-dimensional simplices. In this case, the necessary and sufficient conditions are equivalent and a constructive procedure yields an affine feedback control law, that solves the reachability problem under consideration.",
"We propose an abstraction method for medium-scale biomolecular networks, based on hybrid dynamical systems with continuous multi-affine dynamics. This abstraction method follows naturally from the notion of approximating nonlinear rate laws with continuous piecewise linear functions and can be easily automated. An efficient reachability algorithm is possible for the resulting class of hybrid systems. An approximation for an ordinary differential equation model of the lac operon is constructed, and it is shown that the abstraction passes the same experimental tests as were used to validate the original model. The well studied biological system exhibits bistability and switching behaviour, arising from positive feedback in the expression mechanism of the lac operon. The switching property of the lac system is an example of the major qualitative features that are the building blocks of higher level, more coarse-grained descriptions. The present approach is useful in helping to correctly identify such properties and in connecting them to the underlying molecular dynamical details. Reachability analysis, together with knowledge of the steady-state structure, is used to identify ranges of parameter values for which the system maintains the bistable switching property.",
"",
"In this paper, we focus on a particular class of nonlinear affine control systems of the form dx/dt = f(x) + Bu, where the drift f is a multi-affine vector field (i.e., affine in each state component), the control distribution B is constant, and the control u is constrained to a convex set. For such a system, we first derive necessary and sufficient conditions for the existence of a multi-affine feedback control law keeping the system in a rectangular invariant. We then derive sufficient conditions for driving all initial states in a rectangle through a desired facet in finite time. If the control constraints are polyhedral, we show that all these conditions translate to checking the feasibility of systems of linear inequalities to be satisfied by the control at the vertices of the state rectangle. This work is motivated by the need to construct discrete abstractions for continuous and hybrid systems, in which analysis and control tasks specified in terms of reachability of sets of states can be reduced to searches on finite graphs. We show the application of our results to the problem of controlling the angular velocity of an aircraft with gas jet actuators."
]
}
|
1107.5924
|
1511590296
|
In this paper a novel computational technique for finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized to control the imposed level of approximation, such that with increasing parameter value the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study. This is a full version of the paper published in the proceedings of CompMod 2011.
|
Direct methods are mostly based on hybridization realized by partitioning the system state space into domains where the local continuous behaviour is linearized @cite_14 . This method, in an improved form, has been applied to non-linear biochemical dynamical systems @cite_7 . In general, direct methods give good results for low-dimensional systems and small initial sets. In comparison with indirect approaches, they are computationally harder. From this viewpoint, our approach lies between both extremes.
|
{
"cite_N": [
"@cite_14",
"@cite_7"
],
"mid": [
"2154679417",
"1606658314"
],
"abstract": [
"In this article, we describe some recent results on the hybridization methods for the analysis of nonlinear systems. The main idea of our hybridization approach is to apply the hybrid systems methodology as a systematic approximation method. More concretely, we partition the state space of a complex system into regions that only intersect on their boundaries, and then approximate its dynamics in each region by a simpler one. Then, the resulting hybrid system, which we call a hybridization, is used to yield approximate analysis results for the original system. We also prove important properties of the hybridization, and propose two effective hybridization construction methods, which allow approximating the original nonlinear system with a good convergence rate.",
"In this paper we describe reachability computation for continuous and hybrid systems and its potential contribution to the process of building and debugging biological models. We then develop a novel algorithm for computing reachable states for nonlinear systems and report experimental results obtained using a prototype implementation. We believe these results constitute a promising contribution to the analysis of complex models of biological systems."
]
}
|
1107.5468
|
2952299844
|
Wireless 802.11 links operate in unlicensed spectrum and so must accommodate other unlicensed transmitters which generate pulsed interference. We propose a new approach for detecting the presence of pulsed interference affecting 802.11 links, and for estimating temporal statistics of this interference. This approach builds on recent work on distinguishing collision losses from noise losses in 802.11 links. When the intervals between interference pulses are i.i.d., the approach is not confined to estimating the mean and variance of these intervals but can recover the complete probability distribution. The approach is a transmitter-side technique that provides per-link information and is compatible with standard hardware. We demonstrate the effectiveness of the proposed approach using extensive experimental measurements. In addition to applications to monitoring, management and diagnostics, the fundamental information provided by our approach can potentially be used to adapt the frame durations used in a network so as to increase capacity in the presence of pulsed interference.
|
MAC-layer approaches include some of the earliest and most popular rate control algorithms. Techniques such as ARF @cite_24 , RBAR @cite_12 and RRAA @cite_8 attempt to use frame transmission successes and failures as a means to indirectly measure channel conditions. However, these techniques cannot distinguish between noise, collision, or hidden-node sources of error. In @cite_4 , rate control via loss differentiation is proposed via a modified ARF algorithm; it was shown to greatly improve performance through the inclusion of a NAK signal, but this requires a modification to the 802.11 MAC. Use of RTS/CTS signalling has been proposed for distinguishing collisions from channel noise losses, e.g. @cite_15 @cite_0 . However, such approaches can perform poorly in the presence of pulsed interference such as hidden terminals @cite_3 .
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_15",
"@cite_12"
],
"mid": [
"2123446478",
"2161822490",
"2147189535",
"2141273556",
"",
"2157412826",
"2003531160"
],
"abstract": [
"In a WLAN subject to variable wireless channel conditions, rate adaptation plays an important role to more efficiently utilize the physical link. However, the existing rate adaptation algorithms for IEEE 802.11 WLANs do not take into account the loss of frames due to collisions. In a WLAN with coexistence of multiple stations, two types of frame losses due to (a) link errors and (b) collisions over the wireless link can coexist and severely degrade the performance of the existing rate adaptation algorithms. In this paper, we propose a new automatic rate fallback algorithm that can differentiate the two types of losses and sharpen the accuracy of the rate adaptation process. Numerical results show that the new algorithm can substantially improve the performance of IEEE 802.11 WLANs.",
"Rate adaptation is a mechanism unspecified by the 802.11 standards, yet critical to system performance as it exploits the multi-rate capability of the physical layer. In this paper, we conduct a systematic and experimental study on rate adaptation over 802.11 wireless networks. Our main contributions are two-fold. First, we critique five design guidelines adopted by most existing algorithms. Our study reveals that these seemingly correct guidelines can be misleading in practice, thus incurring significant performance penalties in certain scenarios. The fundamental challenge is that rate adaptation must accurately estimate the channel condition despite the presence of various dynamics caused by fading, mobility and hidden terminals. Second, we design and implement a new Robust Rate Adaptation Algorithm (RRAA) that addresses the above challenge. RRAA uses short-term loss ratio to opportunistically guide its rate change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decrease. Our extensive experiments have shown that RRAA outperforms three well-known rate adaptation solutions (ARF, AARF, and SampleRate) in all tested scenarios, with throughput improvement up to 143%.",
"We propose a powerful MAC/PHY cross-layer approach to measuring IEEE 802.11 transmission opportunities in WLAN networks on a per-link basis. Our estimator can operate at a single station and it is able to: 1) classify losses caused by noise, collisions, and hidden nodes; and 2) distinguish between these losses and the unfairness caused by both exposed nodes and channel capture. Our estimator provides quantitative measures of the different causes of lost transmission opportunities, requiring only local measures at the 802.11 transmitter and no modification to the 802.11 protocol or in other stations. Our approach is suited to implementation on commodity hardware, and we demonstrate our prototype implementation via experimental assessments. We finally show how our estimator can help the WLAN station to improve its local performance.",
"Link adaptation to dynamically select the data transmission rate at a given time has been recognized as an effective way to improve the goodput performance of the IEEE 802.11 wireless local-area networks (WLANs). Recently, with the introduction of the new high-speed 802.11a physical layer (PHY), it is even more important to have a well-designed link adaptation scheme work with the 802.11a PHY such that its multiple transmission rates can be exploited. In this paper, we first present a generic method to analyze the goodput performance of an 802.11a system under the distributed coordination function (DCF) and express the expected effective goodput as a closed-form function of the data payload length, the frame retry count, the wireless channel condition, and the selected data transmission rate. Then, based on the theoretical analysis, we propose a novel MPDU (MAC protocol data unit)-based link adaptation scheme for the 802.11a systems. It is a simple table-driven approach and the basic idea is to preestablish a best PHY mode table by applying the dynamic programming technique. The best PHY mode table is indexed by the system status triplet that consists of the data payload length, the wireless channel condition, and the frame retry count. At runtime, a wireless station determines the most appropriate PHY mode for the next transmission attempt by a simple table lookup, using the most up-to-date system status as the index. Our in-depth simulation shows that the proposed MPDU-based link adaptation scheme outperforms the single-mode schemes and the autorate fallback (ARF) scheme-which is used in Lucent Technologies' WaveLAN-II networking devices-significantly in terms of the average goodput, the frame drop rate, and the average number of transmission attempts per data frame delivery.",
"",
"Streaming multimedia content in real-time over a wireless link is a challenging task because of the rapid fluctuations in link conditions that can occur due to movement, interference, and so on. The popular IEEE 802.11 standard includes low-level tuning parameters like the transmission rate. Standard device drivers for today's wireless products are based on gathering statistics, and consequently, adapt rather slowly to changes in conditions. To meet the strict latency requirements of streaming applications, we designed and implemented an advanced control algorithm that uses signal-strength (SNR) information to achieve fast responses. Since SNR readings are quite noisy we do not use that information to directly control the rate setting, but rather as a safeguard limiting the range of feasible settings to choose from. We report on real-time experiments involving two laptops equipped with IEEE 802.11a wireless interface cards. The results show that using SNR information greatly enhances responsiveness in comparison to statistics-based rate controllers.",
"Wireless local area networks (W-LANs) have become increasingly popular due to the recent availability of affordable devices that are capable of communicating at high data rates. These high rates are possible, in part, through new modulation schemes that are optimized for the channel conditions, bringing about a dramatic increase in bandwidth efficiency. Since the choice of which modulation scheme to use depends on the current state of the transmission channel, newer wireless devices often support multiple modulation schemes, and hence multiple data rates, with mechanisms to switch between them. Users are given the option to either select an operational data rate manually or to let the device automatically choose the appropriate modulation scheme (data rate) to match the prevailing conditions. Automatic rate selection protocols have been studied for cellular networks, but there have been relatively few proposals for W-LANs. In this paper we present a rate adaptive MAC protocol called the Receiver-Based AutoRate (RBAR) protocol. The novelty of RBAR is that its rate adaptation mechanism is in the receiver instead of in the sender. This is in contrast to existing schemes in devices like the WaveLAN II [15]. We show that RBAR is better because it results in a more efficient channel quality estimation, which is then reflected in a higher overall throughput. Our protocol is based on the RTS/CTS mechanism and consequently it can be incorporated into many medium access control protocols, including the widely popular IEEE 802.11 protocol. Simulation results of an implementation of RBAR inside IEEE 802.11 show that RBAR performs consistently well."
]
}
|
1107.5468
|
2952299844
|
Wireless 802.11 links operate in unlicensed spectrum and so must accommodate other unlicensed transmitters which generate pulsed interference. We propose a new approach for detecting the presence of pulsed interference affecting 802.11 links, and for estimating temporal statistics of this interference. This approach builds on recent work on distinguishing collision losses from noise losses in 802.11 links. When the intervals between interference pulses are i.i.d., the approach is not confined to estimating the mean and variance of these intervals but can recover the complete probability distribution. The approach is a transmitter-side technique that provides per-link information and is compatible with standard hardware. We demonstrate the effectiveness of the proposed approach using extensive experimental measurements. In addition to applications to monitoring, management and diagnostics, the fundamental information provided by our approach can potentially be used to adapt the frame durations used in a network so as to increase capacity in the presence of pulsed interference.
|
With regard to combined MAC/PHY approaches, the present paper builds upon the packet-pair approach proposed in @cite_3 @cite_23 for estimating the frame error rates due to collisions, noise and hidden terminals. See also the closely related work in @cite_35 . @cite_3 @cite_23 @cite_35 focus on time-invariant channels and do not consider estimation of temporal statistics. @cite_10 considers a problem similar to that of @cite_3 , but uses channel busy/idle time information.
|
{
"cite_N": [
"@cite_35",
"@cite_23",
"@cite_3",
"@cite_10"
],
"mid": [
"2106500630",
"1494686460",
"2147189535",
"2169146840"
],
"abstract": [
"Current rate control (selection) algorithms in IEEE 802.11 are not based on accurate measurements of packet errors caused at the physical layer. Instead, algorithms act on measurements which, either implicitly or explicitly, mix physical errors with those arising from contention. In this paper we first illustrate how contention can adversely affect the performance of these algorithms, and point out the potential benefits of an ability to isolate the physical packet error rate. We introduce and compare two variants of a single core idea enabling the isolation and accurate measurement of physical packet error, based on exploiting existing features of the MAC standard in a novel way. One is based on the RTS/CTS mechanism, and the other on packet fragmentation. Using proof-of-concept experimental results from a wireless testbed, we show these mechanisms can be used to improve the performance of two existing algorithms, SampleRate and AMRR, both for individual stations and for the system as a whole, and show how incremental deployment is unproblematic. We discuss how the methodology can be integrated in a modular way into rate control algorithms with acceptable overhead.",
"In this paper we present the first field measurements taken using a new approach proposed in [1] for measuring link impairments in 802.11 WLANs. This uses a sender-side MAC/PHY cross-layer technique that can be implemented on standard hardware and is able to explicitly classify lost transmission opportunities into noise-related losses, collision induced losses, hidden-node losses and to distinguish among these different types of impairments on a per-link basis. We show that potential benefits arising from the availability of accurate and reliable data are considerable.",
"We propose a powerful MAC/PHY cross-layer approach to measuring IEEE 802.11 transmission opportunities in WLAN networks on a per-link basis. Our estimator can operate at a single station and it is able to: 1) classify losses caused by noise, collisions, and hidden nodes; and 2) distinguish between these losses and the unfairness caused by both exposed nodes and channel capture. Our estimator provides quantitative measures of the different causes of lost transmission opportunities, requiring only local measures at the 802.11 transmitter and no modification to the 802.11 protocol or in other stations. Our approach is suited to implementation on commodity hardware, and we demonstrate our prototype implementation via experimental assessments. We finally show how our estimator can help the WLAN station to improve its local performance.",
"Current 802.11 networks do not typically achieve the maximum potential throughput despite link adaptation and cross-layer optimization techniques designed to alleviate many causes of packet loss. A primary contributing factor is the difficulty in distinguishing between various causes of packet loss, including collisions caused by high network use, co-channel interference from neighboring networks, and errors due to poor channel conditions. In this paper, we propose a novel method for estimating various collision type probabilities locally at a given node of an 802.11 network. Our approach is based on combining locally observable quantities with information observed and broadcast by the access point (AP) in order to obtain partial spatial information about the network traffic. We provide a systematic assessment and definition of the different types of collision, and show how to approximate each of them using only local and AP information. Additionally, we show how to approximate the sensitivity of these probabilities to key related configuration parameters including carrier sense threshold and packet length. We verify our methods through NS-2 simulations, and characterize estimation accuracy of each of the considered collision types."
]
}
|
1107.4557
|
2949957935
|
Consumers increasingly rate, review and research products online. Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing.
|
Spam has historically been studied in the contexts of e-mail @cite_15 , and the Web @cite_5 @cite_20 . Recently, researchers have begun to look at opinion spam as well @cite_33 @cite_17 @cite_34 .
|
{
"cite_N": [
"@cite_33",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_20",
"@cite_17"
],
"mid": [
"2047756776",
"1845137714",
"2169384781",
"2137974413",
"2066055909",
""
],
"abstract": [
"Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them",
"Web spam pages use various techniques to achieve higher-than-deserved rankings in a search engine's results. While human experts can identify spam, it is too expensive to manually evaluate a large number of pages. Instead, we propose techniques to semi-automatically separate reputable, good pages from spam. We first select a small set of seed pages to be evaluated by an expert. Once we manually identify the reputable seed pages, we use the link structure of the web to discover other pages that are likely to be good. In this paper we discuss possible ways to implement the seed selection and the discovery of good pages. We present results of experiments run on the World Wide Web indexed by AltaVista and evaluate the performance of our techniques. Our results show that we can effectively filter out spam from a significant fraction of the web, based on a good seed set of less than 200 sites.",
"We study the use of support vector machines (SVM) in classifying e-mail as spam or nonspam by comparing it to three other classification algorithms: Ripper, Rocchio, and boosting decision trees. These four algorithms were tested on two different data sets: one data set where the number of features were constrained to the 1000 best features and another data set where the dimensionality was over 7000. SVM performed best when using binary features. For both data sets, boosting trees and SVM had acceptable test performance in terms of accuracy and speed. However, SVM had significantly less training time.",
"As the use of online reviews grows, so does the risk of providers trying to influence review postings through the submission of false reviews. It is difficult for users of online review platforms to detect deception as important cues are missing in online environments. Automatic screening technologies promise a reduction in the risk but need to be informed by research as to how to classify reviews as suspicious. Using findings from deception theory, a study was conducted to compare the language structure of deceptive and truthful hotel reviews. The results show that deceptive and truthful reviews are different in terms of lexical complexity, the use of first person pronouns, the inclusion of brand names, and their sentiment. However, the results suggest that it might be difficult to distinguish between deceptive and truthful reviews based on structural properties.",
"In this paper, we continue our investigations of \"web spam\": the injection of artificially-created pages into the web in order to influence the results from search engines, to drive traffic to certain pages for fun or profit. This paper considers some previously-undescribed techniques for automatically detecting spam pages, examines the effectiveness of these techniques in isolation and when aggregated using classification algorithms. When combined, our heuristics correctly identify 2,037 (86.2%) of the 2,364 spam pages (13.8%) in our judged collection of 17,168 pages, while misidentifying 526 spam and non-spam pages (3.1%).",
""
]
}
|
1107.4557
|
2949957935
|
Consumers increasingly rate, review and research products online. Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing.
|
Research has also been conducted on the related task of psycholinguistic deception detection. , and later Mihalcea and Strapparava, ask participants to give both their true and untrue views on personal issues (e.g., their stance on the death penalty). consider computer-mediated deception in role-playing games designed to be played over instant messaging and e-mail. However, these studies compare @math -gram-based deception classifiers only to a random guess baseline of 50%. Lastly, automatic approaches to determining review quality have been studied---directly @cite_36 , and in the contexts of helpfulness @cite_6 @cite_37 @cite_35 and credibility @cite_14 . Unfortunately, most measures of quality employed in those works are based exclusively on human judgments, which we find to be poorly calibrated to detecting deceptive opinion spam.
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_36",
"@cite_6"
],
"mid": [
"2115613989",
"2087294982",
"",
"2123622235",
"2950622308"
],
"abstract": [
"User-generated reviews are a common and valuable source of product information, yet little attention has been paid as to how best to present them to end-users. In this paper, we describe a classification-based recommender system that is designed to recommend the most helpful reviews for a given product. We present a large-scale evaluation of our approach using TripAdvisor hotel reviews, and we show that our approach is capable of suggesting superior reviews compared to a number of alternative recommendation benchmarks.",
"User-supplied reviews are widely and increasingly used to enhance e-commerce and other websites. Because reviews can be numerous and varying in quality, it is important to assess how helpful each review is. While review helpfulness is currently assessed manually, in this paper we consider the task of automatically assessing it. Experiments using SVM regression on a variety of features over Amazon.com product reviews show promising results, with rank correlations of up to 0.66. We found that the most useful features include the length of the review, its unigrams, and its product rating.",
"",
"Assessing the quality of user generated content is an important problem for many web forums. While quality is currently assessed manually, we propose an algorithm to assess the quality of forum posts automatically and test it on data provided by Nabble.com. We use state-of-the-art classification techniques and experiment with five feature classes: Surface, Lexical, Syntactic, Forum specific and Similarity features. We achieve an accuracy of 89% on the task of automatically assessing post quality in the software domain using forum specific features. Without forum specific features, we achieve an accuracy of 82%.",
"There are many on-line settings in which users publicly express opinions. A number of these offer mechanisms for other users to evaluate these opinions; a canonical example is Amazon.com, where reviews come with annotations like \"26 of 32 people found the following review helpful.\" Opinion evaluation appears in many off-line settings as well, including market research and political campaigns. Reasoning about the evaluation of an opinion is fundamentally different from reasoning about the opinion itself: rather than asking, \"What did Y think of X?\", we are asking, \"What did Z think of Y's opinion of X?\" Here we develop a framework for analyzing and modeling opinion evaluation, using a large-scale collection of Amazon book reviews as a dataset. We find that the perceived helpfulness of a review depends not just on its content but also, in subtle ways, on how the expressed evaluation relates to other evaluations of the same product. As part of our approach, we develop novel methods that take advantage of the phenomenon of review \"plagiarism\" to control for the effects of text in opinion evaluation, and we provide a simple and natural mathematical model consistent with our findings. Our analysis also allows us to distinguish among the predictions of competing theories from sociology and social psychology, and to discover unexpected differences in the collective opinion-evaluation behavior of user populations from different countries."
]
}
|
1107.5290
|
2949973360
|
We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, and partial differential equation techniques. The approach is to approximate the (non-polyhedral) cone of convex functions by a polyhedral cone which can be represented by linear inequalities. This approach leads to an optimization problem with linear constraints which can be computed efficiently, hundreds of times faster than existing methods.
|
The earliest application of variational problems with convexity constraints is Newton's problem of finding a body of minimal resistance @cite_9 . There the convexity constraint arises as a natural assumption on the shape of the body; see @cite_9 for a discussion and also see @cite_8 . A modern application is to mathematical economics @cite_10 @cite_16 . These problems can often be recast as the projection of a function (in the @math or @math norm) onto the set of convex functions defined on the bounded domain @math . Geometric applications include Alexandroff's problem and Cheeger's problem; see @cite_5 for references. For a discussion of applications to economics and the history of this problem, we refer to @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"1993548995",
"1965171516",
"1994641279",
"2056647974",
""
],
"abstract": [
"",
"We investigate the minimization of Newton's functional for the problem of the body of minimal resistance with maximal height @math in the class of convex developable functions defined in a disc. This class is a natural candidate to find a (non-radial) minimizer in accordance with earlier results. We prove that the minimizer in this class has a minimal set in the form of a regular polygon with @math sides centered in the disc, and numerical experiments indicate that the natural number @math is a non-decreasing function of @math . The corresponding functions all achieve a lower value of the functional than the optimal radially symmetric function with the same height @math .",
"In 1685, Sir Isaac Newton studied the motion of bodies through an inviscid and incompressible medium. In his words (from his Principia Mathematica): If in a rare medium, consisting of equal particles freely disposed at equal distances from each other, a globe and a cylinder described on equal diameter move with equal velocities in the direction of the axis of the cylinder, (then) the resistance of the globe will be half as great as that of the cylinder.... I reckon that this proposition will be not without application in the building of ships.",
"We present numerical methods to solve optimization problems on the space of convex functions or among convex bodies. Hence convexity is a constraint on the admissible objects, whereas the functionals are not required to be convex. To deal with this, our method mixes geometrical and numerical algorithms. We give several applications arising from classical problems in geometry and analysis: Alexandrov's problem of finding a convex body of prescribed surface function; Cheeger's problem of a subdomain minimizing the ratio surface area on volume; Newton's problem of the body of minimal resistance. In particular for the latter application, the minimizers are still unknown, except in some particular classes. We give approximate solutions better than the theoretical known ones, hence demonstrating that the minimizers do not belong to these classes.",
"The seller of N distinct objects is uncertain about the buyer’s valuation for those objects. The seller’s problem, to maximize expected revenue, consists of maximizing a linear functional over a convex set of mechanisms. A solution to the seller’s problem can always be found in an extreme point of the feasible set. We identify the relevant extreme points and faces of the feasible set. With N = 1, the extreme points are easily described providing simple proofs of well-known results. The revenue-maximizing mechanism assigns the object with probability one or zero depending on the buyer’s report. With N > 1, extreme points often involve randomization in the assignment of goods. Virtually any extreme point of the feasible set maximizes revenue for a well-behaved distribution of buyer’s valuations. We provide a simple algebraic procedure to determine whether a mechanism is an extreme point.",
""
]
}
|
1107.5290
|
2949973360
|
We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, and partial differential equation techniques. The approach is to approximate the (non-polyhedral) cone of convex functions by a polyhedral cone which can be represented by linear inequalities. This approach leads to an optimization problem with linear constraints which can be computed efficiently, hundreds of times faster than existing methods.
|
There have been a few different numerical approaches to this problem, which rely on adapting PDE techniques to the problem at hand. Early work @cite_4 using PDE-type methods did not make assertions about the convexity (or approximate convexity) of the resulting solutions. Later work by @cite_0 and @cite_5 identified some of the difficulties in working with convex functions. These difficulties suggest that a straightforward adaptation of standard numerical methods is not possible. The introduction of a large number (superlinear in the number of variables) of global constraints was required in order to ensure discrete convexity.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_4"
],
"mid": [
"2047510318",
"1994641279",
"2006026366"
],
"abstract": [
"We describe an algorithm to approximate the minimizer of an elliptic functional of the form ∫ j(x, u, ∇u) on the set C of convex functions u in an appropriate functional space X. Such problems arise for instance in mathematical economics [4]. A special case gives the convex envelope u_0^{**} of a given function u_0. Let (T_n) be any quasiuniform sequence of meshes whose diameter goes to zero, and I_n the corresponding affine interpolation operators. We prove that the minimizer over C is the limit of the sequence (u_n), where u_n minimizes the functional over I_n(C). We give an implementable characterization of I_n(C). Then the finite dimensional problem turns out to be a minimization problem with linear constraints.",
"We present numerical methods to solve optimization problems on the space of convex functions or among convex bodies. Hence convexity is a constraint on the admissible objects, whereas the functionals are not required to be convex. To deal with this, our method mixes geometrical and numerical algorithms. We give several applications arising from classical problems in geometry and analysis: Alexandrov's problem of finding a convex body of prescribed surface function; Cheeger's problem of a subdomain minimizing the ratio surface area on volume; Newton's problem of the body of minimal resistance. In particular for the latter application, the minimizers are still unknown, except in some particular classes. We give approximate solutions better than the theoretical known ones, hence demonstrating that the minimizers do not belong to these classes.",
"The goal of this paper is to introduce the approximated convex envelope of a function and to estimate how it differs from its convex envelope. Such a problem arises in various physical situations where the function considered is some energy that has to be minimized.This study is a first step toward understanding how to approximate the quasi-convex envelope of a function. The importance of this issue is due to the various applications that are encountered, in particular, in the field of material science."
]
}
|
1107.5290
|
2949973360
|
We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, and partial differential equation techniques. The approach is to approximate the (non-polyhedral) cone of convex functions by a polyhedral cone which can be represented by linear inequalities. This approach leads to an optimization problem with linear constraints which can be computed efficiently, hundreds of times faster than existing methods.
|
A recent work on the problem is @cite_1 (see also @cite_15 ). In these works, approximate convexity is sought by enforcing positive definiteness of the discrete Hessian. However, as the authors of this work explain (see also ), the fact that a discrete Hessian is non-negative definite does not ensure that the corresponding points can be interpolated to a convex function. The resulting optimization problem is a conic problem, which is generically more difficult to solve than one with linear constraints. However, the number of constraints is smaller, on the order of the number of variables, in contrast to @cite_0 , in which the number of (linear) constraints grows superlinearly in the number of variables.
|
{
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_1"
],
"mid": [
"2047510318",
"",
"2016361304"
],
"abstract": [
"We describe an algorithm to approximate the minimizer of an elliptic functional of the form ∫ j(x, u, ∇u) on the set C of convex functions u in an appropriate functional space X. Such problems arise for instance in mathematical economics [4]. A special case gives the convex envelope u_0^{**} of a given function u_0. Let (T_n) be any quasiuniform sequence of meshes whose diameter goes to zero, and I_n the corresponding affine interpolation operators. We prove that the minimizer over C is the limit of the sequence (u_n), where u_n minimizes the functional over I_n(C). We give an implementable characterization of I_n(C). Then the finite dimensional problem turns out to be a minimization problem with linear constraints.",
"",
"Many problems of theoretical and practical interest involve finding an optimum over a family of convex functions. For instance, finding the projection on the convex functions in Hk(Ω), and optimizing functionals arising from some problems in economics. In the continuous setting and assuming smoothness, the convexity constraints may be given locally by asking the Hessian matrix to be positive semidefinite, but in making discrete approximations two difficulties arise: the continuous solutions may be not smooth, and functions with positive semidefinite discrete Hessian need not be convex in a discrete sense. Previous work has concentrated on non-local descriptions of convexity, making the number of constraints to grow super-linearly with the number of nodes even in dimension 2, and these descriptions are very difficult to extend to higher dimensions. In this paper we propose a finite difference approximation using positive semidefinite programs and discrete Hessians, and prove convergence under very general conditions, even when the continuous solution is not smooth, working on any dimension, and requiring a linear number of constraints in the number of nodes. Using semidefinite programming codes, we show concrete examples of approximations to problems in two and three dimensions."
]
}
|
1107.4667
|
2952367229
|
This paper addresses the problem of correlation estimation in sets of compressed images. We consider a framework where images are represented under the form of linear measurements due to low complexity sensing or security requirements. We assume that the images are correlated through the displacement of visual objects due to motion or viewpoint change and the correlation is effectively represented by optical flow or motion field models. The correlation is estimated in the compressed domain by jointly processing the linear measurements. We first show that the correlated images can be efficiently related using a linear operator. Using this linear relationship we then describe the dependencies between images in the compressed domain. We further cast a regularized optimization problem where the correlation is estimated in order to satisfy both data consistency and motion smoothness objectives with a Graph Cut algorithm. We analyze in detail the correlation estimation performance and quantify the penalty due to image compression. Extensive experiments in stereo and video imaging applications show that our novel solution stays competitive with methods that implement complex image reconstruction steps prior to correlation estimation. We finally use the estimated correlation in a novel joint image reconstruction scheme that is based on an optimization problem with sparsity priors on the reconstructed images. Additional experiments show that our correlation estimation algorithm leads to an effective reconstruction of pairs of images in distributed image coding schemes that outperform independent reconstruction algorithms by 2 to 4 dB.
|
In recent years, signal acquisition based on random projections has received significant attention in many applications like medical imaging, compressive imaging and even sensor networks. Donoho @cite_16 and Candes @cite_31 show that a small number of linear measurements contain enough information to reconstruct a sparse or a compressible signal. In particular they show that if a signal has a sparse representation in one basis then it can be recovered from a small number of linear measurements taken on another (random) basis that is incoherent with the first one. Essentially, if the signal is @math -sparse (i.e., if the signal contains @math significant components), then one needs approximately @math linear measurements (typically @math = 3 or 4) to reconstruct the signal with high probability @cite_10 . Such results open the door to novel low complexity sensing solutions where the computational complexity for signal reconstruction or analysis is pushed to the decoder. These ideas have been applied to image acquisition @cite_11 @cite_17 @cite_18 and later extended to video sequences @cite_29 @cite_33 @cite_8 @cite_20 . The effect of measurement quantization and the lossy compression of linear measurements has been studied in @cite_40 .
|
{
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_33",
"@cite_8",
"@cite_29",
"@cite_40",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2122548617",
"1966832930",
"",
"2198925517",
"2118016489",
"2145096794",
"",
"67772112",
"",
""
],
"abstract": [
"",
"In this article, the authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling.",
"Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.",
"",
"",
"Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.",
"This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ ∈ T} f(τ) δ(t − τ) obeying |T| ≤ C_M · (log N)^{-1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ_1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{-M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.",
"",
"Can we recover a signal f ∈ R^N from a small number of linear measurements? A series of recent papers developed a collection of results showing that it is surprisingly possible to reconstruct certain types of signals accurately from limited measurements. In a nutshell, suppose that f is compressible in the sense that it is well-approximated by a linear combination of M vectors taken from a known basis. Then not knowing anything in advance about the signal, f can (very nearly) be recovered from about M log N generic nonadaptive measurements only. The recovery procedure is concrete and consists in solving a simple convex optimization program. In this paper, we show that these ideas are of practical significance. Inspired by theoretical developments, we propose a series of practical recovery procedures and test them on a series of signals and images which are known to be well approximated in wavelet bases. We demonstrate empirically that it is possible to recover an object from about 3M-5M projections onto generically chosen vectors with the same accuracy as the ideal M-term wavelet approximation. We briefly discuss possible implications in the areas of data compression and medical imaging.",
"",
""
]
}
|
1107.4588
|
2952251845
|
We present a study of the group purchasing behavior of daily deals in Groupon and LivingSocial and introduce a predictive dynamic model of collective attention for group buying behavior. In our model, the aggregate number of purchases at a given time comprises two types of processes: random discovery and social propagation. We find that these processes are very clearly separated by an inflection point. Using large data sets from both Groupon and LivingSocial we show how the model is able to predict the success of group deals as a function of time. We find that Groupon deals are easier to predict accurately earlier in the deal lifecycle than LivingSocial deals because the final number of deal purchases saturates more quickly. One possible explanation for this is that the incentive to socially propagate a deal is based on an individual threshold in LivingSocial, whereas in Groupon it is based on a collective threshold, which is reached very early. Furthermore, the personal benefit of propagating a deal is also greater in LivingSocial.
|
According to @cite_6 @cite_4 , a buyer's social network strongly influences her purchasing behavior. In @cite_4 , Guo et al. analyze data from the e-commerce site Taobao (a Chinese consumer marketplace and the world's largest e-commerce website, http://www.taobao.com) to understand how individuals' commercial transactions are embedded in their social graphs. They show that implicit information passing exists in the Taobao network, and that communication between buyers drives purchases. However, according to the study presented in @cite_10 , social factors may affect user purchase behavior to different degrees for different e-commerce products.
|
{
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_6"
],
"mid": [
"2105535951",
"2109816759",
"1975569759"
],
"abstract": [
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.",
"While social interactions are critical to understanding consumer behavior, the relationship between social and commerce networks has not been explored on a large scale. We analyze Taobao, a Chinese consumer marketplace that is the world's largest e-commerce website. What sets Taobao apart from its competitors is its integrated instant messaging tool, which buyers can use to ask sellers about products or ask other buyers for advice. In our study, we focus on how an individual's commercial transactions are embedded in their social graphs. By studying triads and the directed closure process, we quantify the presence of information passing and gain insights into when different types of links form in the network. Using seller ratings and review information, we then quantify a price of trust. How much will a consumer pay for transaction with a trusted seller? We conclude by modeling this consumer choice problem: if a buyer wishes to purchase a particular product, how does (s)he decide which store to purchase it from? By analyzing the performance of various feature sets in an information retrieval setting, we demonstrate how the social graph factors into understanding consumer behavior.",
"Why and to what extent do people make significant purchases from people with whom they have prior noncommercial relationships ? Using data from the economic sociology module of the 1996 General Social Survey, the authors document high levels of within-network exchanges. They argue that transacting with social contacts is effective because it embeds commercial exchanges in a web of obligations and holds the seller's network hostage to appropriate role performance in the economic transaction. It follows that within-network exchanges will be more common in risky transactions that are unlikely to be repeated and in which uncertainty is high. The data support this view. Self-reports about major purchases are consistent with the expectation that exchange frequency reduces the extent of within-network exchanges. Responses to questions about preferences for in-group exchanges support the argument that uncertainty about product and performance quality leads people to prefer sellers with whom they have noncommercial ties. Moreover, people prefer to avoid selling to social contacts under the same conditions that lead buyers to seek such transactions; and people who transact with friends and relatives report greater satisfaction with the results than do people who transact with strangers, especially for risk-laden exchanges"
]
}
|
1107.4588
|
2952251845
|
We present a study of the group purchasing behavior of daily deals in Groupon and LivingSocial and introduce a predictive dynamic model of collective attention for group buying behavior. In our model, the aggregate number of purchases at a given time comprises two types of processes: random discovery and social propagation. We find that these processes are very clearly separated by an inflection point. Using large data sets from both Groupon and LivingSocial we show how the model is able to predict the success of group deals as a function of time. We find that Groupon deals are easier to predict accurately earlier in the deal lifecycle than LivingSocial deals because the final number of deal purchases saturates more quickly. One possible explanation for this is that the incentive to socially propagate a deal is based on an individual threshold in LivingSocial, whereas in Groupon it is based on a collective threshold, which is reached very early. Furthermore, the personal benefit of propagating a deal is also greater in LivingSocial.
|
Recent studies of collective attention on social media sites such as Twitter, Digg and YouTube @cite_12 @cite_9 @cite_5 have clarified the interplay between popularity and novelty of user generated content. The allocation of attention across items was found to be universally log-normal, as a result of a multiplicative process that can be explained by an information propagation mechanism inherent in all these sites. While the specific time scales over which novelty decays differ between different systems depending on their typical type of content, the functional form of the decay is consistent and thus future popularity is predictable.
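The multiplicative-process explanation can be illustrated with a short simulation (purely illustrative parameters, not from the cited studies): if each item's popularity is a product of independent growth factors, then log-popularity is a sum of iid terms and is therefore approximately normal by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
# 10,000 items, each accumulating 50 independent multiplicative growth factors
factors = rng.uniform(0.8, 1.3, size=(10_000, 50))
popularity = factors.prod(axis=1)

logs = np.log(popularity)
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
# Near-zero skewness of log-popularity is consistent with a log-normal law
```

The same check applied to empirical view or purchase counts is how the log-normal attention claim is typically validated.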
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_12"
],
"mid": [
"2117192789",
"2070366435",
"2058465497"
],
"abstract": [
"Social media generates a prodigious wealth of real-time content at an incessant rate. From all the content that people create and share, only a few topics manage to attract enough attention to rise to the top and become temporal trends which are displayed to users. The question of what factors cause the formation and persistence of trends is an important one that has not been answered yet. In this paper, we conduct an intensive study of trending topics on Twitter and provide a theoretical basis for the formation, persistence and decay of trends. We also demonstrate empirically how factors such as user activity and number of followers do not contribute strongly to trend creation and its propagation. In fact, we find that the resonance of the content with the users of the social network plays a major role in causing trends.",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades."
]
}
|
1107.4573
|
2953008162
|
It has been argued that analogy is the core of cognition. In AI research, algorithms for analogy are often limited by the need for hand-coded high-level representations as input. An alternative approach is to use high-level perception, in which high-level representations are automatically generated from raw data. Analogy perception is the process of recognizing analogies using high-level perception. We present PairClass, an algorithm for analogy perception that recognizes lexical proportional analogies using representations that are automatically generated from a large corpus of raw textual data. A proportional analogy is an analogy of the form A:B::C:D, meaning "A is to B as C is to D". A lexical proportional analogy is a proportional analogy with words, such as carpenter:wood::mason:stone. PairClass represents the semantic relations between two words using a high-dimensional feature vector, in which the elements are based on frequencies of patterns in the corpus. PairClass recognizes analogies by applying standard supervised machine learning techniques to the feature vectors. We show how seven different tests of word comprehension can be framed as problems of analogy perception and we then apply PairClass to the seven resulting sets of analogy perception problems. We achieve competitive results on all seven tests. This is the first time a uniform approach has handled such a range of tests of word comprehension.
|
One of the tasks in SemEval 2007 was the classification of semantic relations between nominals @cite_2 . SemEval 2007 was the Fourth International Workshop on Semantic Evaluations. More information on Task 4, the classification of semantic relations between nominals, is available at http://purl.org/net/semeval-task4. The problem is to classify semantic relations between nominals (nouns and noun compounds) in the context of a sentence. The task attracted 14 teams who created 15 systems, all of which used supervised machine learning with features that were lexicon-based, corpus-based, or both.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2152358231"
],
"abstract": [
"The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems."
]
}
|
1107.3600
|
1586075948
|
In many scientific disciplines structures in high-dimensional data have to be found, e.g., in stellar spectra, in genome data, or in face recognition tasks. In this work we present a novel approach to non-linear dimensionality reduction. It is based on fitting K-nearest neighbor regression to the unsupervised regression framework for learning of low-dimensional manifolds. Similar to related approaches that are mostly based on kernel methods, unsupervised K-nearest neighbor (UNN) regression optimizes latent variables w.r.t. the data space reconstruction error employing the K-nearest neighbor heuristic. The problem of optimizing latent neighborhoods is difficult to solve, but the UNN formulation allows the design of efficient strategies that iteratively embed latent points into fixed neighborhood topologies. UNN is well suited to sorting high-dimensional data. The iterative variants are analyzed experimentally.
|
Many dimensionality reduction methods have been proposed; a very famous one is principal component analysis (PCA), which assumes linearity of the manifold @cite_8 @cite_12 . An extension for learning of non-linear manifolds is kernel PCA @cite_1 , which projects the data into a Hilbert space. Further well-known approaches to manifold learning are Isomap by Tenenbaum, Silva, and Langford @cite_5 , locally linear embedding (LLE) by Roweis and Saul @cite_2 , and principal curves by Hastie and Stuetzle @cite_9 .
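As a point of reference, linear PCA (the first method mentioned) reduces to an SVD of the centered data. A minimal sketch (function name and toy data are illustrative):

```python
import numpy as np

def pca(X, d):
    """Project X (n samples x D dims) onto its top-d principal components."""
    Xc = X - X.mean(axis=0)                        # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T                           # latent coordinates, shape (n, d)

# Points near a line in 3-D: one linear component recovers the latent parameter
rng = np.random.default_rng(1)
t = rng.normal(size=(100, 1))
X = t @ np.array([[1.0, 2.0, 3.0]]) + 0.01 * rng.normal(size=(100, 3))
Z = pca(X, 1)                                      # Z correlates almost perfectly with t
```

Kernel PCA, Isomap and LLE replace this linear projection with nonlinear constructions, but share the same goal of low-dimensional latent coordinates.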
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_5",
"@cite_12"
],
"mid": [
"51824616",
"1824657313",
"2140095548",
"2053186076",
"2001141328",
"2294798173"
],
"abstract": [
"",
"The nearest neighbor (NN) technique is very simple, highly efficient and effective in the field of pattern recognition, text categorization, object recognition etc. Its simplicity is its main advantage, but the disadvantages can't be ignored even. The memory requirement and computation complexity also matter. Many techniques are developed to overcome these limitations. NN techniques are broadly classified into structure less and structure based techniques. In this paper, we present the survey of such techniques. Weighted kNN, Model based kNN, Condensed NN, Reduced NN, Generalized NN are structure less techniques whereas k-d tree, ball tree, Principal Axis Tree, Nearest Feature Line, Tunable NN, Orthogonal Search Tree are structure based algorithms developed on the basis of kNN. The structure less method overcome memory limitation and structure based techniques reduce the computational complexity.",
"A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
""
]
}
|
1107.3600
|
1586075948
|
In many scientific disciplines structures in high-dimensional data have to be found, e.g., in stellar spectra, in genome data, or in face recognition tasks. In this work we present a novel approach to non-linear dimensionality reduction. It is based on fitting K-nearest neighbor regression to the unsupervised regression framework for learning of low-dimensional manifolds. Similar to related approaches that are mostly based on kernel methods, unsupervised K-nearest neighbor (UNN) regression optimizes latent variables w.r.t. the data space reconstruction error employing the K-nearest neighbor heuristic. The problem of optimizing latent neighborhoods is difficult to solve, but the UNN formulation allows the design of efficient strategies that iteratively embed latent points into fixed neighborhood topologies. UNN is well suited to sorting high-dimensional data. The iterative variants are analyzed experimentally.
|
In the following, we give a short introduction to K-nearest neighbor regression, which is the basis of the UNN approach. The problem in regression is to predict output values @math for given input values @math based on sets of @math input-output examples @math . The goal is to learn a function @math known as the regression function. We assume that a data set consisting of observed pairs @math is given. For a novel pattern @math , KNN regression computes the mean of the function values of its K-nearest neighbors: with set @math containing the indices of the @math -nearest neighbors of @math . The idea of KNN is based on the assumption of locality in data space: in local neighborhoods of @math , patterns are expected to have output values @math (or class labels) similar to those of @math . Consequently, for an unknown @math the label must be similar to the labels of the closest patterns, which is modeled by the average of the output values of the @math nearest samples. KNN has proven effective in various applications, e.g., in the detection of quasars in interstellar data sets @cite_6 .
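The KNN regression rule above fits in a few lines (a minimal sketch with toy data; the function name and dataset are illustrative, not from the cited work):

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict f(x) as the mean output of the k nearest training patterns."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances to x
    nearest = np.argsort(dists)[:k]              # indices of the k closest patterns
    return y_train[nearest].mean()               # average their output values

# Toy 1-D data from y = 2x; locality makes the neighborhood average a sensible estimate
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
pred = knn_regress(X, y, np.array([2.1]), k=3)   # neighbors at 2.0, 3.0, 1.0 -> mean 4.0
```

The same rule, applied with latent points as inputs, is what UNN optimizes over latent positions.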
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2150709118"
],
"abstract": [
"We present a classification-based approach to identify quasi-stellar radio sources (quasars) in the Sloan Digital Sky Survey and evaluate its performance on a manually labeled training set. While reasonable results can already be obtained via approaches working only on photometric data, our experiments indicate that simple but problem-specific features extracted from spectroscopic data can significantly improve the classification performance. Since our approach works orthogonal to existing classification schemes used for building the spectroscopic catalogs, our classification results are well suited for a mutual assessment of the approaches' accuracies."
]
}
|
1107.2867
|
2951313487
|
Low density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm or log-likelihood ratio belief propagation (LLR-BP) over the factor graph of the code. One of the disadvantages of LDPC codes is the onset of an error floor at high values of signal to noise ratio, caused by trapping sets. In this paper, we propose a two stage decoder to deal with different types of trapping sets. Oscillating trapping sets are taken care of by the first stage of the decoder and the elementary trapping sets are handled by the second stage of the decoder. Simulation results on the regular PEG (504,252,3,6) code show that the proposed two stage decoder performs significantly better than the standard decoder.
|
In @cite_3 , the problems caused by trapping sets, and why the decoder fails to overcome them, are studied in detail. @cite_3 , @cite_2 propose an averaging decoder that averages the LLR value of each node over several iterations. Averaging prevents erroneous information from being trapped in the code graph by slowing down the convergence speed of the nodes. Though computationally inexpensive, this method also slows down the convergence of reliable nodes. This hurts the decoder's performance in the waterfall region, where oscillating trapping sets are more prevalent than elementary trapping sets.
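The damping effect of averaging can be seen in a toy sketch (illustrative only, not the cited decoder): an oscillating LLR sequence keeps a stable sign once each update is replaced by the running mean of the iteration history.

```python
def averaged_llr(history, new_llr):
    """Replace the raw LLR with the running mean over all iterations so far."""
    history.append(new_llr)
    return sum(history) / len(history)

hist = []
damped = []
for raw in [4.0, -3.0, 5.0, -4.0]:   # raw LLR flips sign every iteration
    damped.append(averaged_llr(hist, raw))
# damped = [4.0, 0.5, 2.0, 0.5]: the averaged LLR stays positive throughout
```

The same mechanism is what slows reliable nodes down: their large consistent LLRs grow only at the rate of the running mean.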
|
{
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"1577660440",
"2154287427"
],
"abstract": [
"Several combinatorial properties of low-density parity-check (LDPC) codes, such as minimum distance, diameter, stopping number, girth and cycle-length distribution of the corresponding Tanner graph, are known to influence their performance under iterative decoding. Recently, a new class of combinatorial configurations, termed trapping sets, was shown to be of significant importance in determining the properties of LDPC codes in the error-floor region. Very little is known both about the existence parameters of trapping sets in structured LDPC codes and about possible techniques for reducing their negative influence on the code's performance. In this paper, we address both these problems from an algorithmic and combinatorial perspective. We first provide a numerical study of the trapping phenomena for the Margulis code, which exhibits a fairly high error-floor. Based on this analysis, conducted for two different implementations of iterative belief propagation, we propose a novel decoding process, termed averaged decoding. Averaged decoding provides for a significant reduction in the number of incorrectly decoded frames in the error-floor region of the Margulis code. Furthermore, based on the results of the algorithmic approach, we suggest a novel combinatorial characterizations of trapping sets in the class of LDPC codes based on finite geometries. Projective geometry LDPC codes are suspected to have extremely low error-floors, which is a property that we may attribute to the non-existence of certain small trapping sets in the code graph.",
"We generalize the notion of the stopping redundancy in order to study the smallest size of a trapping set in Tanner graphs of linear block codes. In this context, we introduce the notion of the trapping redundancy of a code, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set. Trapping sets with certain parameter sizes are known to cause error-floors in the performance curves of iterative belief propagation (BP) decoders, and it is therefore important to identify decoding matrices that avoid such sets. Bounds on the trapping redundancy are obtained using probabilistic and constructive methods, and the analysis covers both general and elementary trapping sets. Numerical values for these bounds are computed for the [2640, 1320] Margulis code and the class of projective geometry codes, and compared with some new code-specific trapping set size estimates."
]
}
|
1107.2867
|
2951313487
|
Low density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm or log-likelihood ratio belief propagation (LLR-BP) over the factor graph of the code. One of the disadvantages of LDPC codes is the onset of an error floor at high values of signal to noise ratio, caused by trapping sets. In this paper, we propose a two stage decoder to deal with different types of trapping sets. Oscillating trapping sets are taken care of by the first stage of the decoder and the elementary trapping sets are handled by the second stage of the decoder. Simulation results on the regular PEG (504,252,3,6) code show that the proposed two stage decoder performs significantly better than the standard decoder.
|
In @cite_7 , the BP algorithm is studied from the point of view of the Bethe free energy, along with how it fails in the presence of cycles. They propose a BP algorithm with a tunable relaxation parameter @math and a modification to the outgoing message from the variable node. By adjusting the parameter @math at different SNR points, they achieve better performance than the standard BP decoder. This modification follows the same principle as the averaging decoder: slowing down the information flow to prevent erroneous information from being trapped.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"1668142578"
],
"abstract": [
"The decoding of Low-Density Parity-Check codes by the Belief Propagation (BP) algorithm is revisited. We check the iterative algorithm for its convergence to a codeword (termination), we run Monte Carlo simulations to find the probability distribution function of the termination time, n_it. Tested on an example [155, 64, 20] code, this termination curve shows a maximum and an extended algebraic tail at the highest values of n_it. Aiming to reduce the tail of the termination curve we consider a family of iterative algorithms modifying the standard BP by means of a simple relaxation. The relaxation parameter controls the convergence of the modified BP algorithm to a minimum of the Bethe free energy. The improvement is experimentally demonstrated for Additive-White-Gaussian-Noise channel in some range of the signal-to-noise ratios. We also discuss the trade-off between the relaxation parameter of the improved iterative scheme and the number of iterations."
]
}
|
1107.2867
|
2951313487
|
Low density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm or log-likelihood ratio belief propagation (LLR-BP) over the factor graph of the code. One of the disadvantages of LDPC codes is the onset of an error floor at high values of signal to noise ratio, caused by trapping sets. In this paper, we propose a two stage decoder to deal with different types of trapping sets. Oscillating trapping sets are taken care of by the first stage of the decoder and the elementary trapping sets are handled by the second stage of the decoder. Simulation results on the regular PEG (504,252,3,6) code show that the proposed two stage decoder performs significantly better than the standard decoder.
|
A two stage backtracking decoder was proposed in @cite_9 @cite_1 . The fact that an unsatisfied check node is connected to an odd number of variable nodes in a trapping set is used to construct a matching set @math . Each variable node belonging to the matching set @math is then flipped separately and the first stage is re-run. This process is repeated until the error is corrected or all nodes in the matching set have been exhausted.
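A hedged sketch of this backtracking loop (the parity-check matrix, the identity stand-in for the stage-one decoder, and the helper names are illustrative, not taken from @cite_9 or @cite_1):

```python
import numpy as np

def unsatisfied_checks(H, x):
    return np.flatnonzero(H @ x % 2)             # rows of H with failed parity

def backtrack(H, x, redecode):
    """Flip matching-set candidates one at a time and re-run the first stage."""
    fails = unsatisfied_checks(H, x)
    matching = np.flatnonzero(H[fails].sum(axis=0))  # variables touching failed checks
    for v in matching:
        trial = x.copy()
        trial[v] ^= 1                            # flip one suspect bit
        trial = redecode(trial)                  # stage-one decoder stands in here
        if unsatisfied_checks(H, trial).size == 0:
            return trial                         # success: all checks satisfied
    return x                                     # matching set exhausted

# Tiny parity-check code with codewords 000 and 111; identity as the re-decoder
H = np.array([[1, 1, 0], [0, 1, 1]])
fixed = backtrack(H, np.array([1, 1, 0]), lambda t: t)
```

On this toy input the second flip in the matching set lands on the codeword 111 and the loop terminates.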
|
{
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2115922584",
"2142490958"
],
"abstract": [
"In iterative decoding of LDPC codes, trapping sets often lead to high error floors. In this work, we propose a two-stage iterative decoding to break trapping sets. Simulation results show that the error floor performance can be significantly improved with this decoding scheme.",
"Error-floors are the main reason for excluding LDPC codes from applications requiring very low bit-error rate. They are attributed to a particular structure in the codes' Tanner graphs, known as trapping sets, which traps the message-passing algorithms commonly used to decode LDPC codes, and prevents decoding from converging to the correct codeword. A technique is proposed to break trapping sets while decoding. Based on decoding results leading to a decoding failure, some bits are identified in a previous iteration and flipped and decoding is restarted. This backtracking may enable the decoder to get out of the trapped state. A semi-analytical method is also proposed to predict the error-floor after backtracking. Simulation results indicate the effectiveness of the proposed technique in lowering the error-floor. The technique, which has moderate complexity overhead, is applicable to any code without requiring a prior knowledge of the structure of its trapping sets."
]
}
|
1107.2702
|
2952988819
|
We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over @math is the distribution of a sum of @math independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 and are a natural @math -parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our first main result we give a highly efficient algorithm which learns to @math -accuracy (with respect to the total variation distance) using @math samples. The running time of the algorithm is in the size of its input data, i.e., @math bit-operations. (Observe that each draw from the distribution is a @math -bit string.) Our second main result is a proper learning algorithm that learns to @math -accuracy using @math samples, and runs in time @math . This is nearly optimal, since any algorithm for this problem must use @math samples. We also give positive and negative results for some extensions of this learning problem to weighted sums of independent Bernoulli random variables.
|
Many results in probability theory study approximations to the Poisson Binomial distribution via simpler distributions. In a well-known result, Le Cam @cite_24 shows that for any PBD @math with @math [ d_{TV}(X, \mathrm{Poi}(p_1 + \cdots + p_n)) \le 2 \sum_{i=1}^n p_i^2, ] where @math denotes the Poisson distribution with parameter @math . Subsequently many other proofs of this result and similar ones were given using a range of different techniques; @cite_28 @cite_4 @cite_14 @cite_10 is a sampling of work along these lines, and Steele @cite_12 gives an extensive list of relevant references. Significant work has also been done on approximating PBDs by normal distributions (see e.g. @cite_37 @cite_21 @cite_22 @cite_40 ) and by Binomial distributions (see e.g. @cite_15 @cite_9 @cite_38 ). These results provide structural information about PBDs that can be well-approximated via simpler distributions, but fall short of our goal of obtaining approximations of a general, unknown PBD up to arbitrary accuracy. Indeed, the approximations obtained in the probability literature (such as the Poisson, Normal and Binomial approximations) typically depend on the first few moments of the target PBD, while higher moments are crucial for arbitrary approximation @cite_38 .
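Le Cam's bound is easy to check numerically. The sketch below (illustrative, with randomly chosen small p_i) computes the exact PBD pmf by convolving in one Bernoulli at a time and compares its distance from the matching Poisson pmf against the bound:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.05, size=200)   # small p_i: regime where Poisson fits well
lam = p.sum()

# Exact PBD pmf on support 0..200, convolving in one Bernoulli(p_i) at a time
pbd = np.zeros(201)
pbd[0] = 1.0
for pi in p:
    pbd = (1 - pi) * pbd + pi * np.concatenate(([0.0], pbd[:-1]))

# Poisson(lam) pmf on the same (truncated) support
k = np.arange(201)
poi = np.exp(k * np.log(lam) - lam - np.array([lgamma(i + 1.0) for i in k]))

tv = 0.5 * np.abs(pbd - poi).sum()     # total variation distance (truncated)
le_cam = 2.0 * (p ** 2).sum()          # Le Cam's upper bound
```

With these parameters the measured distance sits comfortably below the bound, consistent with the theorem; higher-moment effects only appear once arbitrary accuracy is demanded.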
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_40",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2094799272",
"2020658897",
"2017388832",
"2067676737",
"",
"2032059218",
"2337787166",
"",
"2095575865",
"",
"2063522056",
"",
"2052974900"
],
"abstract": [
"The Poisson binomial distribution is approximated by a binomial distribution and also by finite signed measures resulting from the corresponding Krawtchouk expansion. Bounds and asymptotic relations for the total variation distance and the point metric are given.",
"The sum of finitely many variates possesses, under familiar conditions, an almost Gaussian probability distribution. This already much discussed \"central limit theorem\" in the theory of probability is the object of further investigation in the present paper. The cases of Liapounoff, Lindeberg, and Feller will be reviewed. Numerical estimates for the degrees of approximation attained in these cases will be presented in the three theorems of §4. Theorem 3, the arithmetical refinement of the general theorem of Feller, constitutes our principal result. As the foregoing implies, we require throughout the paper that the given variates be totally independent. And we consider only one-dimensional variates. The first three sections of the paper are devoted to the preparatory Theorem 1, in which the variates meet the further condition of possessing finite third order absolute moments. Let X_1, X_2, ..., X_n be the given variates. For each k (k = 1, 2, ..., n) let μ_2(X_k) and μ_3(X_k) denote, respectively, the second and third order absolute moments of X_k about its mean (expected) value a_k. These moments are either both zero or both positive. The former case arises only when X_k is essentially constant, i.e., differs from its mean value at most in cases of total probability zero. To avoid trivialities we suppose that μ_2(X_k) > 0 for at least one k (k = 1, 2, ..., n). The non-negative square root of μ_2(X_k) is the standard deviation of X_k and will be denoted by σ_k.",
"",
"",
"",
"The simple example (n = 1) is given for classroom presentation when teaching UMPU tests. Along with the more general example (any n), it is intended to dispel a common belief that the labels UMP and UMPU are synonymous. A student could thus be encouraged to look beyond a single best test among unbiased tests by trying alternative methods. The issue will then be one of practicality: The importance of using an unbiased test that is not very powerful must be weighed against the power superiority, in a large subset of the alternative parameter space, of a biased test. 4. CONCLUSION",
"A binomial approximation theorem for dependent indicators using Stein's method and coupling is proved. The approximating binomial distribution B(n*, p*) is chosen in such a way that its first moment is equal to that of W and its variance is asymptotically equal to that of W as n tends to infinity, where W is the sum of independent indicators and p is bounded away from 1. Three examples, one of which concerns two different approximations for the hypergeometric distribution, are given to illustrate applications of the theorem obtained.",
"",
"",
"",
"Upper and lower bounds are given for the total variation distance between the distribution of a sum S of n independent, non-identically distributed 0-1 random variables and the binomial distribution (n, p) having the same expectation as S. The proof uses the Stein--Chen technique. Equivalence of the total variation and the Kolmogorov distance is established, and an application to sampling with and without replacement is presented.",
"",
"where λ = p_1 + p_2 + ... + p_n. Naturally, this inequality contains the classical Poisson limit law (just set p_i = λ/n and note that the right side simplifies to 2λ²/n), but it also achieves a great deal more. In particular, Le Cam's inequality identifies the sum of the squares of the p_i as a quantity governing the quality of the Poisson approximation. Le Cam's inequality also seems to be one of those facts that repeatedly calls to be proved and improved. Almost before the ink was dry on Le Cam's 1960 paper, an elementary proof was given by Hodges and Le Cam [18]. This proof was followed by numerous generalizations and refinements including contributions by Kerstan [19], Franken [15], Vervaat [30], Galambos [17], Freedman [16], Serfling [24], and Chen [11, 12]. In fact, for raw simplicity it is hard to find a better proof of Le Cam's inequality than that given in the survey of Serfling [25]. One purpose of this note is to provide a proof of Le Cam's inequality using some basic facts from matrix analysis. This proof is simple, but simplicity is not its raison d'etre. It also serves as a concrete introduction to the semi-group method for approximation of probability distributions. This method was used in Le Cam [20], and it has been used again most recently by Deheuvels and Pfeifer [13] to provide impressively precise results. The semi-group method is elegant and powerful, but it faces tough competition, especially from the coupling method and the Chen-Stein method. The literature of these methods is reviewed, and it is shown how they also lead to proofs of Le Cam's inequality."
]
}
|
1107.2702
|
2952988819
|
We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over @math is the distribution of a sum of @math independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 Poisson:37 and are a natural @math -parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our first main result we give a highly efficient algorithm which learns to @math -accuracy (with respect to the total variation distance) using @math samples. The running time of the algorithm is in the size of its input data, i.e., @math bit-operations. (Observe that each draw from the distribution is a @math -bit string.) Our second main result is a proper learning algorithm that learns to @math -accuracy using @math samples, and runs in time @math . This is nearly optimal, since any algorithm for this problem must use @math samples. We also give positive and negative results for some extensions of this learning problem to weighted sums of independent Bernoulli random variables.
|
Taking a different perspective, it is easy to show (see Section 2 of @cite_13 ) that every PBD is a unimodal distribution over @math . The learnability of general unimodal distributions over @math is well understood: Birgé @cite_36 @cite_5 has given a computationally efficient algorithm that can learn any unimodal distribution over @math to variation distance @math from @math samples, and has shown that any algorithm must use @math samples. (The @cite_36 lower bound is stated for continuous unimodal distributions, but the arguments are easily adapted to the discrete case.) Our main result, Theorem , shows that the additional PBD assumption can be leveraged to obtain sample complexity with a computationally highly efficient algorithm.
|
{
"cite_N": [
"@cite_36",
"@cite_5",
"@cite_13"
],
"mid": [
"2014255551",
"2126204693",
"1979848870"
],
"abstract": [
"We consider the class of all unimodal densities defined on an interval of length L and bounded by H; we study the minimax risk over this class when estimating the density from n i.i.d. observations, the loss being measured by the L1 distance between the estimator and the true density.",
"The Grenander estimator of a decreasing density, which is defined as the derivative of the concave envelope of the empirical c.d.f., is known to be a very good estimator of an unknown decreasing density on the half-line R + when this density is not assumed to be smooth. It is indeed the maximum likelihood estimator and one can get precise upper bounds for its risk when the loss is measured by the L 1 -distance between densities. Moreover, if one restricts oneself to the compact subsets of decreasing densities bounded by H with support on [0, L] the risk of this estimator is within a fixed factor of the minimax risk. The same is true if one deals with the maximum likelihood estimator for unimodal densities with known mode. When the mode is unknown, the maximum likelihood estimator does not exist any more. We shall provide a general purpose estimator (together with a computational algorithm) for estimating nonsmooth unimodal densities. Its risk is the same as the risk of the Grenander estimator based on the knowledge of the true mode plus some lower order term. It can also cope with small departures from unimodality.",
"Abstract In a classical theorem, Ibragimov demonstrated the strong unimodality of log-concave probability density functions. Comparable results for lattice distributions are exhibited and their potential significance is suggested."
]
}
|
1107.2972
|
2089726881
|
Motivated by the Markov Chain Monte Carlo (MCMC) approach to the compression of discrete sources developed by Jalali and Weissman, we propose a lossy compression algorithm for analog sources that relies on a finite reproduction alphabet, which grows with the input length. The algorithm achieves, in an appropriate asymptotic sense, the optimum Shannon theoretic tradeoff between rate and distortion, universally for stationary ergodic continuous amplitude sources. We further propose an MCMC-based algorithm that resorts to a reduced reproduction alphabet when such reduction does not prevent achieving the Shannon limit. The latter algorithm is advantageous due to its reduced complexity and improved rates of convergence when employed on sources with a finite and small optimum reproduction alphabet.
|
For analog sources , less progress has been made in developing theoretically-justified compression algorithms. Some results have been derived specifically for the high-rate regime, where the Shannon lower bound is asymptotically tight @cite_9 . In particular, in the limit of low distortions the RD limit has been characterized for mixtures of probability distribution functions (pdf's) where one distribution is discrete and the other continuous @cite_28 @cite_21 . For example, the sparse Gaussian source is a mixture pdf; bounds on its RD function have been provided @cite_26 @cite_6 @cite_30 @cite_24 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_24"
],
"mid": [
"2165434382",
"2134568086",
"2144797521",
"2157962569",
"2114361399",
"2164187391",
"2138420014"
],
"abstract": [
"Modern image coders balance bitrate used for encoding the location of significant transform coefficients, and bitrate used for coding their values. The importance of balancing location and value information in practical coders raises fundamental open questions about how to code even simple processes with joint uncertainty in coefficient location and magnitude. The most basic example of such a process is studied: a 2-D process studied earlier by Weidmann and Vetterli that combines Gaussian magnitude information with Bernoulli location uncertainty. An insight into the coding of this process by investigating several new coding strategies based on more general approaches to lossy compression of location is presented. Extending these ideas to practical coding, a trellis-coded quantization algorithm with performance matching the published theoretical bounds is developed. Finally, the quality of the strategies is evaluated by deriving a rate-distortion bound using Blahut's algorithm for discrete sources.",
"The rate distortion behavior of sparse memoryless sources is studied. These serve as models of sparse signal representations and facilitate the performance analysis of “sparsifying” transforms like the wavelet transform and nonlinear approximation schemes. For strictly sparse binary sources with Hamming distortion, R(D) is shown to be almost linear. For nonstrictly sparse continuous-valued sources, termed compressible, two measures of compressibility are introduced: incomplete moments and geometric mean. The former lead to low- and high-rate upper bounds on mean squared error D(R), while the latter yields lower and upper bounds on source entropy, thereby characterizing asymptotic R(D) behavior. Thus, the notion of compressibility is quantitatively connected with actual lossy compression. These bounding techniques are applied to two source models: Gaussian mixtures and power laws matching the approximately scale-invariant decay of wavelet coefficients. The former are versatile models for sparse data, which in particular allow to bound high-rate compression performance of a scalar mixture compared to a corresponding unmixed transform coding system. Such a comparison is interesting for transforms with known coefficient decay, but unknown coefficient ordering, e.g., when positions of highest-variance coefficients are unknown. The use of these models and results in distributed coding and compressed sensing scenarios are also discussed.",
"The asymptotic (small distortion) behavior of the rate-distortion function of an n-dimensional source vector with mixed distribution is derived. The source distribution is a finite mixture of components such that under each component distribution a certain subset of the coordinates have a discrete distribution while the remaining coordinates have a joint density. The expected number of coordinates with a joint density is shown to equal the rate-distortion dimension of the source vector. Also, the exact small distortion asymptotic behavior of the rate-distortion function of a special but interesting class of stationary information sources is determined.",
"New results are proved on the convergence of the Shannon (1959) lower bound to the rate distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the Shannon lower bound is asymptotically tight for norm-based distortions, when the source vector has a finite differential entropy and a finite αth moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov (1965) on the asymptotic tightness of the Shannon lower bound for general difference distortion measures with more relaxed conditions on the source density. We also show that the Shannon lower bound relative to a stationary source and single-letter difference distortion is asymptotically tight under very weak assumptions on the source distribution.",
"The asymptotic behavior as ε approaches 0 of the ε-entropy of a mixed random variable and a finite-dimensional mixed random vector is explicitly investigated. It is shown that the asymptotic behavior of this ε-entropy resembles that of a continuous random variable multiplied by the relative weight of the continuous part.",
"Recent rate-distortion analyses of image transform coders are based on a trade-off between the lossless coding of coefficient positions versus the lossy coding of the coefficient values. We propose spike processes as a tool that allows a more fundamental trade-off, namely between lossy position coding and lossy value coding. We investigate the Hamming distortion case and give analytic results for single and multiple spikes. We then consider upper bounds for a single Gaussian spike with squared error distortion. The obtained results show a rate distortion behavior which switches from linear at low rates to exponential at high rates.",
"We study the rate distortion function of the Bernoulli-Gaussian random variable which can be used to model sparse signals. Both lower and upper bounds on the rate distortion function are given.We show that the bounds are almost tight in the low distortion regime for sparse signals. Interestingly, a naive coding scheme is near-optimal in this scenario."
]
}
|
1107.2972
|
2089726881
|
Motivated by the Markov Chain Monte Carlo (MCMC) approach to the compression of discrete sources developed by Jalali and Weissman, we propose a lossy compression algorithm for analog sources that relies on a finite reproduction alphabet, which grows with the input length. The algorithm achieves, in an appropriate asymptotic sense, the optimum Shannon theoretic tradeoff between rate and distortion, universally for stationary ergodic continuous amplitude sources. We further propose an MCMC-based algorithm that resorts to a reduced reproduction alphabet when such reduction does not prevent achieving the Shannon limit. The latter algorithm is advantageous due to its reduced complexity and improved rates of convergence when employed on sources with a finite and small optimum reproduction alphabet.
|
Despite the theoretical insights in the high-rate regime, compression of analog sources at low-to-medium rates is of interest in many applications @cite_2 @cite_27 @cite_29 @cite_12 . There do exist special input pdf's for which entropy coding approaches the RD function @cite_0 in the low-rate limit, but the low-rate regime is challenging in general. We aspire to develop results of general applicability and not be limited to specific pdf's with fortuitous properties.
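As a concrete illustration of the entropy-coding baseline discussed here, the following Monte Carlo sketch estimates an operational rate-distortion point for entropy-coded uniform scalar quantization of a Gaussian source. The step size and sample count are hypothetical choices, and the empirical entropy of the quantizer output stands in for an ideal entropy coder:

```python
# Illustrative sketch of entropy-constrained scalar quantization (ECSQ) on a
# unit-variance Gaussian source: a uniform quantizer with step `delta`, rate
# estimated as the empirical entropy of the quantizer output (bits/sample),
# distortion as mean squared error. All parameters are hypothetical.
import math, random
from collections import Counter

def ecsq_rate_distortion(delta, n=100_000, seed=0):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    idx = [round(x / delta) for x in xs]                 # quantizer cell index
    mse = sum((x - i * delta) ** 2 for x, i in zip(xs, idx)) / n
    counts = Counter(idx)
    rate = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return rate, mse

r, d = ecsq_rate_distortion(delta=0.5)
print(r, d)  # rate in bits/sample, distortion (MSE)
```

At this medium step size the measured distortion is close to the high-rate approximation delta^2 / 12, consistent with the high-rate results cited above.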
|
{
"cite_N": [
"@cite_29",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_12"
],
"mid": [
"2140199336",
"2161225693",
"2113905985",
"2163935396",
"2002182716"
],
"abstract": [
"H.264 AVC is newest video coding standard of the ITU-T Video Coding Experts Group and the ISO IEC Moving Picture Experts Group. The main goals of the H.264 AVC standardization effort have been enhanced compression performance and provision of a \"network-friendly\" video representation addressing \"conversational\" (video telephony) and \"nonconversational\" (storage, broadcast, or streaming) applications. H.264 AVC has achieved a significant improvement in rate-distortion efficiency relative to existing standards. This article provides an overview of the technical features of H.264 AVC, describes profiles and applications for the standard, and outlines the history of the standardization process.",
"This correspondence analyzes the low-resolution performance of entropy-constrained scalar quantization. It focuses mostly on Gaussian sources, for which it is shown that for both binary quantizers and infinite-level uniform threshold quantizers, as D approaches the source variance σ², the least entropy of such quantizers with mean-squared error D or less approaches zero with slope −(log₂ e)/(2σ²). As the Shannon rate-distortion function approaches zero with the same slope, this shows that in the low-resolution region, scalar quantization with entropy coding is asymptotically as good as any coding technique.",
"We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent \"infinite\" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful \"on the fly\" estimation-quantization (EQ) framework that consists of: (i) first finding the maximum-likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an off-line rate-distortion (R-D) optimized quantization entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of our paradigm is the dynamic switching between forward and backward adaptation modes based on the reliability of causal prediction contexts. The performance of our coder is extremely competitive with the best published results in the literature across diverse classes of images and target bitrates of interest, in both compression efficiency and processing speed. For example, our coder exceeds the objective performance of the best zerotree-based wavelet coder based on space-frequency-quantization at all bit rates for all tested images at a fraction of its complexity.",
"A new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all nonzeroed coefficients), and we formalize the problem of optimizing their joint application. We develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive, often outperforming the very best coding algorithms in the literature.",
"Quantization, the process of approximating continuous-amplitude signals by digital (discrete-amplitude) signals, is an important aspect of data compression or coding, the field concerned with the reduction of the number of bits necessary to transmit or store analog data, subject to a distortion or fidelity criterion. The independent quantization of each signal value or parameter is termed scalar quantization, while the joint quantization of a block of parameters is termed block or vector quantization. This tutorial review presents the basic concepts employed in vector quantization and gives a realistic assessment of its benefits and costs when compared to scalar quantization. Vector quantization is presented as a process of redundancy removal that makes effective use of four interrelated properties of vector parameters: linear dependency (correlation), nonlinear dependency, shape of the probability density function (pdf), and vector dimensionality itself. In contrast, scalar quantization can utilize effectively only linear dependency and pdf shape. The basic concepts are illustrated by means of simple examples and the theoretical limits of vector quantizer performance are reviewed, based on results from rate-distortion theory. Practical issues relating to quantizer design, implementation, and performance in actual applications are explored. While many of the methods presented are quite general and can be used for the coding of arbitrary signals, this paper focuses primarily on the coding of speech signals and parameters."
]
}
|
1107.2972
|
2089726881
|
Motivated by the Markov Chain Monte Carlo (MCMC) approach to the compression of discrete sources developed by Jalali and Weissman, we propose a lossy compression algorithm for analog sources that relies on a finite reproduction alphabet, which grows with the input length. The algorithm achieves, in an appropriate asymptotic sense, the optimum Shannon theoretic tradeoff between rate and distortion, universally for stationary ergodic continuous amplitude sources. We further propose an MCMC-based algorithm that resorts to a reduced reproduction alphabet when such reduction does not prevent achieving the Shannon limit. The latter algorithm is advantageous due to its reduced complexity and improved rates of convergence when employed on sources with a finite and small optimum reproduction alphabet.
|
Comparison of entropy coding (ECSQ), results by Yang and Zhang @cite_34 , average rate and distortion of Algorithm 2 (MCMC) over 10 simulations, and the RD function. ( @math , @math , @math , @math .)
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2100540720"
],
"abstract": [
"The fixed slope lossy algorithm derived from the kth-order adaptive arithmetic codeword length function is extended to finite-state decoders or trellis-structured decoders. When this algorithm is used to encode a stationary, ergodic source with a continuous alphabet, the Lagrangian performance converges with probability one to a quantity computable as the infimum of an information-theoretic functional over a set of auxiliary random variables and reproduction levels, where λ > 0 and −λ is designated to be the slope of the rate distortion function R(D) of the source at some D; the quantity is close to R(D) + λD when the order k used in the arithmetic coding or the number of states in the decoders is large enough. An alternating minimization algorithm for computing the quantity is presented; this algorithm is based on a training sequence and in turn gives rise to a design algorithm for variable-rate trellis source codes. The resulting variable-rate trellis source codes are very efficient in low-rate regions. With k=8, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Gaussian sources is comparable to that afforded by trellis-coded quantizers; with k=8 and the number of states in the decoder = 32, the mean-squared error encoding performance at the rate 1/2 bits/sample for memoryless Laplacian sources is about 1 dB better than that afforded by the trellis-coded quantizers with 256 states; with k=8 and the number of states in the decoder = 256, the mean-squared error encoding performance at rates of a fraction of 1 bit/sample for highly dependent Gauss-Markov sources with correlation coefficient 0.9 is within about 0.6 dB of the distortion rate function."
]
}
|
1107.2432
|
2953320191
|
We introduce the Funding Game, in which @math identical resources are to be allocated among @math selfish agents. Each agent requests a number of resources @math and reports a valuation @math , which verifiably lower-bounds @math 's true value for receiving @math items. The pairs @math can be thought of as size-value pairs defining a knapsack problem with capacity @math . A publicly-known algorithm is used to solve this knapsack problem, deciding which requests to satisfy in order to maximize the social welfare. We show that a simple mechanism based on the knapsack highest ratio greedy algorithm provides a Bayesian Price of Anarchy of 2, and for the complete information version of the game we give an algorithm that computes a Nash equilibrium strategy profile in @math time. Our primary algorithmic result shows that an extension of the mechanism to @math rounds has a Price of Anarchy of @math , yielding a graceful tradeoff between communication complexity and the social welfare.
|
This is the most common assumption in the algorithmic mechanism design literature. Multi-unit auctions model the situation where a verification mechanism does not exist and thus agents must be assumed dishonest. Truthfulness can be achieved through VCG payments, but doing so depends on solving the allocation problem optimally, which may be intractable. Starting with the work of Nisan and Ronen @cite_7 , the field of algorithmic mechanism design has sought to reconcile selfishness with computational complexity. Multi-unit auctions have been studied extensively in this context, including truthful mechanisms for single-minded bidders @cite_18 @cite_1 , and @math -minded bidders @cite_19 @cite_6 @cite_4 .
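For concreteness, the knapsack highest-ratio greedy allocation rule that the Funding Game mechanism builds on can be sketched as follows. This is an illustrative sketch with hypothetical inputs and no payment rule, not the exact mechanism of any cited work:

```python
# Illustrative sketch of the knapsack "highest ratio" greedy rule: agents
# report (demand, value) pairs, and requests are granted whole, in
# decreasing order of value density value/demand, while capacity remains.
# The instance below is hypothetical.

def greedy_allocate(requests, capacity):
    """requests: list of (demand, reported_value); returns set of winner indices."""
    order = sorted(range(len(requests)),
                   key=lambda i: requests[i][1] / requests[i][0],
                   reverse=True)
    winners, remaining = set(), capacity
    for i in order:
        demand, _ = requests[i]
        if demand <= remaining:      # grant the request in full, or skip it
            winners.add(i)
            remaining -= demand
    return winners

# Capacity 10, three agents with densities 2.0, 1.8, 1.5.
reqs = [(6, 12), (5, 9), (4, 6)]
print(greedy_allocate(reqs, 10))     # agent 1's request (5 > 4 left) is skipped
```

Note the rule is all-or-nothing per request, which is what makes strategic splitting of requests across rounds interesting in the multi-round extension.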
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_19"
],
"mid": [
"2099584630",
"1924265123",
"2012634103",
"1982492546",
"",
"2103751307"
],
"abstract": [
"When attempting to design a truthful mechanism for a computationally hard problem such as combinatorial auctions, one is faced with the problem that most efficiently computable heuristics cannot be embedded in any truthful mechanism (e.g. VCG-like payment rules will not ensure truthfulness). We develop a set of techniques that allow constructing efficiently computable truthful mechanisms for combinatorial auctions in the special case where only the valuation is unknown by the mechanism (the single parameter case). For this case we extend the work of Lehmann, O'Callaghan, and Shoham, who presented greedy heuristics, and show how to use IF-THEN-ELSE constructs, perform a partial search, and use the LP relaxation. We apply these techniques for several types of combinatorial auctions, obtaining truthful mechanisms with provable approximation ratios.",
"We exhibit incentive compatible multi-unit auctions that are not affine maximizers (i.e., are not of the VCG family) and yet approximate the social welfare to within a factor of 1+ε. For the case of two-item two-bidder auctions we show that these auctions, termed Triage auctions, are the only scalable ones that give an approximation factor better than 2. \"Scalable\" means that the allocation does not depend on the units in which the valuations are measured. We deduce from this that any scalable computationally-efficient incentive-compatible auction for m items and n ≥ 2 bidders cannot approximate the social welfare to within a factor better than 2. This is in contrast to arbitrarily good approximations that can be reached under computational constraints alone, and in contrast to the existence of incentive-compatible mechanisms that achieve the optimal allocation.",
"We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice. Journal of Economic Literature Classification Numbers: C60, C72, D61, D70, D80.",
"This paper deals with the design of efficiently computable incentive compatible, or truthful, mechanisms for combinatorial optimization problems with multi-parameter agents. We focus on approximation algorithms for NP-hard mechanism design problems. These algorithms need to satisfy certain monotonicity properties to ensure truthfulness. Since most of the known approximation techniques do not fulfill these properties, we study alternative techniques.Our first contribution is a quite general method to transform a pseudopolynomial algorithm into a monotone FPTAS. This can be applied to various problems like, e.g., knapsack, constrained shortest path, or job scheduling with deadlines. For example, the monotone FPTAS for the knapsack problem gives a very efficient, truthful mechanism for single-minded multi-unit auctions. The best previous result for such auctions was a 2-approximation. In addition, we present a monotone PTAS for the generalized assignment problem with any bounded number of parameters per agent.The most efficient way to solve packing integer programs (PIPs) is LP-based randomized rounding, which also is in general not monotone. We show that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding. The advantage is that these algorithms are inherently monotone. This way, we can significantly improve the approximation ratios of truthful mechanisms for various fundamental mechanism design problems like single-minded combinatorial auctions (CAs), unsplittable flow routing and multicast routing. Our approximation algorithms can also be used for the winner determination in CAs with general bidders specifying their bids through an oracle.",
"",
"We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multi-unit CAs with B = Ω(log m) copies of each item, and 2 for multi-parameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism by W. Vickrey (1961), E. Clarke (1971) and T. Groves (1973) to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful in expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard."
]
}
|
1107.2432
|
2953320191
|
We introduce the Funding Game, in which @math identical resources are to be allocated among @math selfish agents. Each agent requests a number of resources @math and reports a valuation @math , which verifiably lower-bounds @math 's true value for receiving @math items. The pairs @math can be thought of as size-value pairs defining a knapsack problem with capacity @math . A publicly known algorithm is used to solve this knapsack problem, deciding which requests to satisfy in order to maximize the social welfare. We show that a simple mechanism based on the knapsack highest-ratio greedy algorithm provides a Bayesian Price of Anarchy of 2, and for the complete-information version of the game we give an algorithm that computes a Nash equilibrium strategy profile in @math time. Our primary algorithmic result shows that an extension of the mechanism to @math rounds has a Price of Anarchy of @math , yielding a graceful tradeoff between communication complexity and the social welfare.
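The knapsack highest-ratio greedy rule that the mechanism above is built on can be sketched as follows; this is an illustrative reconstruction (the function and variable names are my own assumptions, and the paper's mechanism may include tie-breaking or other details not shown here):

```python
def greedy_allocate(requests, capacity):
    """Satisfy requests in decreasing order of value-per-resource ratio,
    skipping any request that no longer fits. requests is a list of
    (size d_i, reported value v_i) pairs; returns indices of winners."""
    order = sorted(range(len(requests)),
                   key=lambda i: requests[i][1] / requests[i][0],
                   reverse=True)
    chosen, used = [], 0
    for i in order:
        size, _value = requests[i]
        if used + size <= capacity:
            chosen.append(i)
            used += size
    return chosen

# Three agents with (size, value) requests and capacity 10:
# the ratios are 2.0, 1.0, and 1.5, so agents 0 and 2 are served.
winners = greedy_allocate([(4, 8), (5, 5), (6, 9)], 10)
print(winners)  # → [0, 2]
```

Note that plain density-greedy can be far from the optimal knapsack value in the worst case; the paper's guarantee concerns equilibrium welfare of the induced game, not the algorithm's offline approximation ratio.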
|
More recently, Procaccia and Tennenholtz ( @cite_22 ) initiated the study of strategyproof mechanisms without money, which was followed by the adaptation of many previously studied mechanism design problems to the non-monetary setting ( @cite_10 , @cite_9 , @cite_11 , @cite_5 , @cite_20 , @cite_3 ). The multi-item allocation problem has also been studied in the setting where dishonest agents only partially reveal their valuation functions. The main question in this setting concerns the extent to which limiting communication complexity affects mechanism efficiency. In @cite_14 @cite_17 , for example, bid sizes in a single-item auction are restricted to real numbers expressed by @math bits. In @cite_16 , agent valuation functions are only partially revealed because full revelation would require exponential space in the number of items.
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2147513181",
"2108957189",
"",
"2099348100",
"2154843747",
"1986761989",
"1972357512",
"2093375346",
"49938292",
"1521104861"
],
"abstract": [
"We study auctions with severe bounds on the communication allowed: each bidder may only transmit t bits of information to the auctioneer. We consider both welfare-maximizing and revenue-maximizing auctions under this communication restriction. For both measures, we determine the optimal auction and show that the loss incurred relative to unconstrained auctions is mild. We prove unsurprising properties of these kinds of auctions, e.g. that discrete prices are informationally efficient, as well as some surprising properties, e.g. that asymmetric auctions are better than symmetric ones.",
"The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on payments. In this article, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are almost ubiquitous and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located and a domain where each agent controls multiple locations.",
"",
"We study auctions in which bidders have severe constraints on the size of messages they are allowed to send to the auctioneer. In such auctions, each bidder has a set of k possible bids (i.e. he can send up to t=log(k) bits to the mechanism). This paper studies the loss of economic efficiency and revenue in such mechanisms, compared with the case of unconstrained communication. For any number of players, we present auctions that incur an efficiency loss and a revenue loss of O(1/k^2), and we show that this upper bound is tight. When we allow the players to send their bits sequentially, we can construct even more efficient mechanisms, but only up to a factor of 2 in the amount of communication needed. We also show that when the players’ valuations for the item are not independently distributed, we cannot do much better than a trivial mechanism.",
"We consider the problem of locating facilities in a metric space to serve a set of selfish agents. The cost of an agent is the distance between her own location and the nearest facility. The social cost is the total cost of the agents. We are interested in designing strategy-proof mechanisms without payment that have a small approximation ratio for social cost. A mechanism is a (possibly randomized) algorithm which maps the locations reported by the agents to the locations of the facilities. A mechanism is strategy-proof if no agent can benefit from misreporting her location in any configuration. This setting was first studied by Procaccia and Tennenholtz [21]. They focused on the facility game where agents and facilities are located on the real line. studied the mechanisms for the facility games in a general metric space [1]. However, they focused on the games with only one facility. In this paper, we study the two-facility game in a general metric space, which extends both previous models. We first prove an Ω(n) lower bound of the social cost approximation ratio for deterministic strategy-proof mechanisms. Our lower bound even holds for the line metric space. This significantly improves the previous constant lower bounds [21, 17]. Notice that there is a matching linear upper bound in the line metric space [21]. Next, we provide the first randomized strategy-proof mechanism with a constant approximation ratio of 4. Our mechanism works in general metric spaces. For randomized strategy-proof mechanisms, the previous best upper bound is O(n) which works only in the line metric space.",
"We study the design of truthful mechanisms that do not use payments for the generalized assignment problem (GAP) and its variants. An instance of the GAP consists of a bipartite graph with jobs on one side and machines on the other. Machines have capacities and edges have values and sizes; the goal is to construct a welfare maximizing feasible assignment. In our model of private valuations, motivated by impossibility results, the value and sizes on all job-machine pairs are public information; however, whether an edge exists or not in the bipartite graph is a job's private information. That is, the selfish agents in our model are the jobs, and their private information is their edge set. We want to design mechanisms that are truthful without money (henceforth strategyproof), and produce assignments whose welfare is a good approximation to the optimal omniscient welfare. We study several variants of the GAP starting with matching. For the unweighted version, we give an optimal strategyproof mechanism. For maximum weight bipartite matching, we show that no strategyproof mechanism, deterministic or randomized, can be optimal, and present a 2-approximate strategyproof mechanism along with a matching lowerbound. Next we study knapsack-like problems, which, unlike matching, are NP-hard. For these problems, we develop a general LP-based technique that extends the ideas of Lavi and Swamy [14] to reduce designing a truthful approximate mechanism without money to designing such a mechanism for the fractional version of the problem. We design strategyproof approximate mechanisms for the fractional relaxations of multiple knapsack, size-invariant GAP, and value-invariant GAP, and use this technique to obtain, respectively, 2, 4 and 4-approximate strategyproof mechanisms for these problems. We then design an O(log n)-approximate strategyproof mechanism for the GAP by reducing, with logarithmic loss in the approximation, to our solution for the value-invariant GAP. 
Our technique may be of independent interest for designing truthful mechanisms without money for other LP-based problems.",
"Winner determination in combinatorial auctions has received significant interest in the AI Community in the last 3 years. Another difficult problem in combinatorial auctions is that of eliciting the bidders' preferences. We introduce a progressive, partial-revelation mechanism that determines an efficient allocation and the Vickrey payments. The mechanism is based on a family of algorithms that explore the natural lattice structure of the bidders' combined preferences. The mechanism elicits utilities in a natural sequence, and aims at keeping the amount of elicited information and the effort to compute the information minimal. We present analytical results on the amount of elicitation. We show that no value-querying algorithm that is constrained to querying feasible bundles can save more elicitation than one of our algorithms. We also show that one of our algorithms can determine the Vickrey payments as a costless by-product of determining an optimal allocation.",
"We consider the special case of approval voting when the set of agents and the set of alternatives coincide. This captures situations in which the members of an organization want to elect a president or a committee from their ranks, as well as a variety of problems in networked environments, for example in internet search, social networks like Twitter, or reputation systems like Epinions. More precisely, we look at a setting where each member of a set of n agents approves or disapproves of any other member of the set and we want to select a subset of k agents, for a given value of k, in a strategyproof and approximately efficient way. Here, strategyproofness means that no agent can improve its own chances of being selected by changing the set of other agents it approves. A mechanism is said to provide an approximation ratio of α for some α ≥ 1 if the ratio between the sum of approval scores of any set of size k and that of the set selected by the mechanism is always at most α. We show that for k ∈ 1, 2,..., n − 1 , no deterministic strategyproof mechanism can provide a finite approximation ratio. We then present a randomized strategyproof mechanism that provides an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.",
"We investigate the problem of allocating items (private goods) among competing agents in a setting that is both prior-free and payment-free. Specifically, we focus on allocating multiple heterogeneous items between two agents with additive valuation functions. Our objective is to design strategy-proof mechanisms that are competitive against the most efficient (first-best) allocation. We introduce the family of linear increasing-price (LIP) mechanisms. The LIP mechanisms are strategy-proof, prior-free, and payment-free, and they are exactly the increasing-price mechanisms satisfying a strong responsiveness property. We show how to solve for competitive mechanisms within the LIP family. For the case of two items, we find a LIP mechanism whose competitive ratio is near optimal (the achieved competitive ratio is 0.828, while any strategy-proof mechanism is at most 0.841-competitive). As the number of items goes to infinity, we prove a negative result that any increasing-price mechanism (linear or nonlinear) has a maximal competitive ratio of 0.5. Our results imply that in some cases, it is possible to design good allocation mechanisms without payments and without priors.",
"Mechanism design without money has a rich history in social choice literature. Due to the strong impossibility theorem by Gibbard and Satterthwaite, exploring domains in which there exist dominant strategy mechanisms is one of the central questions in the field. We propose a general framework, called the generalized packing problem (GPP), to study the mechanism design questions without payment. The GPP possesses a rich structure and comprises a number of well-studied models as special cases, including, e.g., matroid, matching, knapsack, independent set, and the generalized assignment problem. We adopt the agenda of approximate mechanism design where the objective is to design a truthful (or strategyproof) mechanism without money that can be implemented in polynomial time and yields a good approximation to the socially optimal solution. We study several special cases of the GPP, and give constant approximation mechanisms for matroid, matching, knapsack, and the generalized assignment problem. Our result for the generalized assignment problem solves an open problem proposed in DG10 . Our main technical contribution is in exploitation of approaches from stable matching, which is a fundamental solution concept in the context of matching marketplaces, in application to mechanism design. Stable matching, while conceptually simple, provides a set of powerful tools to manage and analyze self-interested behaviors of participating agents. Our mechanism uses a stable matching algorithm as a critical component and adopts other approaches like random sampling and online mechanisms. Our work also enriches stable matching theory with a new knapsack-constrained matching model."
]
}
|
1107.1925
|
1975384370
|
In this paper we discuss the dissipative property of near-equilibrium classical solutions to the Cauchy problem of the Vlasov-Maxwell-Boltzmann System in the whole space @math when the positive charged ion flow provides a spatially uniform background. The key point in studying this coupled degenerately dissipative system is to establish the dissipation of the electromagnetic field, which turns out to be of the regularity-loss type. Precisely, for the linearized non-homogeneous system, some @math energy functionals and @math time-frequency functionals which are equivalent with the naturally existing ones are designed to capture the optimal dissipation rate of the system, which in turn yields the optimal @math - @math type time-decay estimates of the corresponding linearized solution operator. These results show a special feature of the one-species Vlasov-Maxwell-Boltzmann system different from the two-species case, namely that the dissipation of the magnetic field for one species is strictly weaker than that for two species. As a by-product, the global existence of solutions to the nonlinear Cauchy problem is also proved by constructing some similar energy functionals, but the time-decay rates of the obtained solution still remain open.
|
Recently, following a combination of @cite_13 and @cite_4 , the optimal large-time behavior of the two-species Vlasov-Maxwell-Boltzmann system was analyzed in @cite_6 . The main finding in @cite_6 is that although the non-homogeneous Maxwell system conserves the energy of the electromagnetic field, the coupling of the Boltzmann equation with the Maxwell system can generate some weak dissipation of the electromagnetic field which is actually of the regularity-loss type. It should be pointed out that even though the form of two-species Vlasov-Maxwell-Boltzmann system looks more complicated than that of the case of one-species, the study of global existence and time-decay rate is much more delicate in the case of one-species because the coupling term in the source of the Maxwell system corresponds to the momentum component of the macroscopic part of the solution which is degenerate with respect to the linearized operator @math . Essentially, it is this kind of the macroscopic coupling feature that leads to some different dissipation properties between two-species and one-species for the Vlasov-Maxwell-Boltzmann system.
|
{
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_4"
],
"mid": [
"",
"2046869394",
"2087931324"
],
"abstract": [
"",
"In this paper we study the large-time behavior of classical solutions to the two-species Vlasov-Maxwell-Boltzmann system in the whole space @math . The existence of global-in-time nearby Maxwellian solutions is known from Strain in 2006. However, the asymptotic behavior of these solutions has been a challenging open problem. Building on our previous work on time decay for the simpler Vlasov-Poisson-Boltzmann system, we prove that these solutions converge to the global Maxwellian with the optimal decay rate of O(t^{-3/2 + 3/(2r)}) in the L^r_x(L^2_v)-norm for any 2 ≤ r ≤ ∞ if the initial perturbation is smooth enough and decays in space velocity fast enough at infinity. Moreover, some explicit rates for the electromagnetic field tending to 0 are also provided. © 2011 Wiley Periodicals, Inc.",
"The Vlasov–Maxwell–Boltzmann system is one of the most fundamental models to describe the dynamics of dilute charged particles, where particles interact via collisions and through their self-consistent electromagnetic field. We prove existence of global in time classical solutions to the Cauchy problem near Maxwellians."
]
}
|
1107.1925
|
1975384370
|
In this paper we discuss the dissipative property of near-equilibrium classical solutions to the Cauchy problem of the Vlasov-Maxwell-Boltzmann System in the whole space @math when the positive charged ion flow provides a spatially uniform background. The key point in studying this coupled degenerately dissipative system is to establish the dissipation of the electromagnetic field, which turns out to be of the regularity-loss type. Precisely, for the linearized non-homogeneous system, some @math energy functionals and @math time-frequency functionals which are equivalent with the naturally existing ones are designed to capture the optimal dissipation rate of the system, which in turn yields the optimal @math - @math type time-decay estimates of the corresponding linearized solution operator. These results show a special feature of the one-species Vlasov-Maxwell-Boltzmann system different from the two-species case, namely that the dissipation of the magnetic field for one species is strictly weaker than that for two species. As a by-product, the global existence of solutions to the nonlinear Cauchy problem is also proved by constructing some similar energy functionals, but the time-decay rates of the obtained solution still remain open.
|
We remark that this feature has also been observed in @cite_0 in the study of the one-species Vlasov-Poisson-Boltzmann system.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1991114201"
],
"abstract": [
"In this paper, we are concerned with the one-species Vlasov–Poisson–Boltzmann system with a nonconstant background density in full space. There exists a stationary solution when the background density is a small perturbation of a positive constant state. We prove the nonlinear stability of solutions to the Cauchy problem near the stationary state in some Sobolev space without any time derivatives. This result is nontrivial even when the background density is a constant state. In the proof, the macroscopic balance laws are essentially used to deal with the a priori estimates on both the microscopic and macroscopic parts of the solution. Moreover, some interactive energy functionals are introduced to overcome difficulty stemming from the absence of time derivatives in the energy functional."
]
}
|
1107.2462
|
2950083673
|
Machine learning approaches to multi-label document classification have to date largely relied on discriminative modeling techniques such as support vector machines. A drawback of these approaches is that performance rapidly drops off as the total number of labels and the number of labels per document increase. This problem is amplified when the label frequencies exhibit the type of highly skewed distributions that are often observed in real-world datasets. In this paper we investigate a class of generative statistical topic models for multi-label documents that associate individual word tokens with different labels. We investigate the advantages of this approach relative to discriminative models, particularly with respect to classification problems involving large numbers of relatively rare labels. We compare the performance of generative and discriminative approaches on document labeling tasks ranging from datasets with several thousand labels to datasets with tens of labels. The experimental results indicate that probabilistic generative models can achieve competitive multi-label classification performance compared to discriminative methods, and have advantages for datasets with many labels and skewed label frequencies.
|
A more recent approach---Labeled LDA (L-LDA)---was designed specifically for multi-label settings. In L-LDA, the training of the LDA model is adapted to account for multi-labeled corpora by putting "topics" in one-to-one correspondence with labels and then restricting the sampling of topics for each document to the set of labels that were assigned to the document, in a manner similar to the Author Model (where the set of authors for each document in the Author Model is now replaced by the set of labels in L-LDA). The primary focus of @cite_2 was to illustrate that L-LDA has certain qualitative advantages over discriminative methods (e.g., the ability to label individual words, as well as providing interpretable snippets for document summarization). Their classification results indicate that under certain conditions LDA-based models may be able to achieve competitive performance with discriminative approaches such as SVMs.
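The restricted sampling step described above---drawing each token's topic only from the document's label set---can be sketched as a single collapsed-Gibbs draw. This is a toy illustration with assumed variable names and hyperparameter values, not the authors' implementation:

```python
import random

def sample_topic(word, doc_labels, doc_topic_counts, topic_word_counts,
                 topic_totals, vocab_size, alpha=0.1, beta=0.01):
    """One collapsed-Gibbs draw for a single token, restricted to the
    document's label set (the defining L-LDA constraint)."""
    weights = []
    for t in doc_labels:  # standard LDA would range over all topics here
        p = ((doc_topic_counts.get(t, 0) + alpha) *
             (topic_word_counts.get((t, word), 0) + beta) /
             (topic_totals.get(t, 0) + vocab_size * beta))
        weights.append(p)
    return random.choices(doc_labels, weights=weights, k=1)[0]

# A document labeled {sports, tech}: the token "cloud" has co-occurred
# with "tech" before, so that label dominates the draw.
random.seed(0)
t = sample_topic("cloud", ["sports", "tech"],
                 {"tech": 3}, {("tech", "cloud"): 2}, {"tech": 5},
                 vocab_size=100)
print(t)
```

Restricting the candidate set to `doc_labels` is the entire modification relative to standard LDA; everything else is the usual count-based conditional.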
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1969486090"
],
"abstract": [
"A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets."
]
}
|
1107.2462
|
2950083673
|
Machine learning approaches to multi-label document classification have to date largely relied on discriminative modeling techniques such as support vector machines. A drawback of these approaches is that performance rapidly drops off as the total number of labels and the number of labels per document increase. This problem is amplified when the label frequencies exhibit the type of highly skewed distributions that are often observed in real-world datasets. In this paper we investigate a class of generative statistical topic models for multi-label documents that associate individual word tokens with different labels. We investigate the advantages of this approach relative to discriminative models, particularly with respect to classification problems involving large numbers of relatively rare labels. We compare the performance of generative and discriminative approaches on document labeling tasks ranging from datasets with several thousand labels to datasets with tens of labels. The experimental results indicate that probabilistic generative models can achieve competitive multi-label classification performance compared to discriminative methods, and have advantages for datasets with many labels and skewed label frequencies.
|
Our work differs from that of @cite_2 in two significant aspects. Firstly, we propose a more flexible set of LDA models for multi-label classification---including one model that takes into account prior label frequencies, and one that can additionally account for label dependencies---which lead to significant improvements in classification performance. The L-LDA model can be viewed as a special case of these models. Secondly, we conduct a much larger range and more systematic set of experiments, including in particular datasets with large numbers of labels with skewed frequency-distributions, and show that generative models do particularly well in this regime compared to discriminative methods. In contrast, @cite_2 compared their L-LDA approach with discriminative models only on relatively small datasets (primarily on the Yahoo! sub-directory datasets discussed in the introduction).
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1969486090"
],
"abstract": [
"A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets."
]
}
|
1107.2462
|
2950083673
|
Machine learning approaches to multi-label document classification have to date largely relied on discriminative modeling techniques such as support vector machines. A drawback of these approaches is that performance rapidly drops off as the total number of labels and the number of labels per document increase. This problem is amplified when the label frequencies exhibit the type of highly skewed distributions that are often observed in real-world datasets. In this paper we investigate a class of generative statistical topic models for multi-label documents that associate individual word tokens with different labels. We investigate the advantages of this approach relative to discriminative models, particularly with respect to classification problems involving large numbers of relatively rare labels. We compare the performance of generative and discriminative approaches on document labeling tasks ranging from datasets with several thousand labels to datasets with tens of labels. The experimental results indicate that probabilistic generative models can achieve competitive multi-label classification performance compared to discriminative methods, and have advantages for datasets with many labels and skewed label frequencies.
|
More recently, @cite_7 demonstrated that the probabilistic framework of conditional random fields showed promise for multi-label classification, compared to discriminative classifiers, as the number of labels within test documents increased. In follow-up work on these models, @cite_6 illustrated that this approach has the further benefit of being able to naturally incorporate unlabeled data for semi-supervised learning. A drawback of the CRF approach is scalability, particularly when accounting for label dependencies: "Exact inference is tractable only for about 3-12 [labels]". Alternatives to exact inference considered in include a "supported inference" method, which learns only to classify the label combinations that occur in the training set, and a binary-pruning method, which ignores dependencies between all but the most commonly observed pairs of labels. Although this method may improve upon approaches that ignore dependencies when restricted to datasets with few labels and many examples (such as traditional benchmark datasets), it seems unlikely that any such methods will be able to properly account for dependencies in datasets with power-law frequency statistics (since nearly all dependencies in these datasets are between labels which have very sparse training data).
|
{
"cite_N": [
"@cite_6",
"@cite_7"
],
"mid": [
"2101101940",
"1976526581"
],
"abstract": [
"Supervised topic models utilize document's side information for discovering predictive low dimensional representations of documents; and existing models apply likelihood-based estimation. In this paper, we present a max-margin supervised topic model for both continuous and categorical response variables. Our approach, the maximum entropy discrimination latent Dirichlet allocation (MedLDA), utilizes the max-margin principle to train supervised topic models and estimate predictive topic representations that are arguably more suitable for prediction. We develop efficient variational methods for posterior inference and demonstrate qualitatively and quantitatively the advantages of MedLDA over likelihood-based topic models on movie review and 20 Newsgroups data sets.",
"Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve."
]
}
|
1107.1744
|
2953015416
|
This paper addresses the problem of minimizing a convex, Lipschitz function @math over a convex, compact set @math under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value @math at any query point @math . The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs @math regret. Since any algorithm has regret at least @math on this problem, our algorithm is optimal in terms of the scaling with @math .
|
The case of convex, Lipschitz cost functions has been studied in the harder adversarial model @cite_9 @cite_14 by constructing one-point gradient estimators. However, the best-known regret bounds for these algorithms are @math . @cite_1 show a regret bound of @math in the adversarial setup when two evaluations of the same function are allowed, instead of just one. However, this does not cover the stochastic bandit optimization setting, since each function evaluation in the stochastic case is corrupted with independent noise, violating the critical requirement of a bounded gradient estimator that their algorithm exploits. Indeed, applying their result in our setup yields a regret bound of @math .
|
{
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_1"
],
"mid": [
"2952840318",
"2097487180",
"153281708"
],
"abstract": [
"We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, without access to the gradient (only being able to evaluate the function at a single point).",
"In the multi-armed bandit problem, an online algorithm must choose from a set of strategies in a sequence of n trials so as to minimize the total cost of the chosen strategies. While nearly tight upper and lower bounds are known in the case when the strategy set is finite, much less is known when there is an infinite strategy set. Here we consider the case when the set of strategies is a subset of ℝd, and the cost functions are continuous. In the d = 1 case, we improve on the best-known upper and lower bounds, closing the gap to a sublogarithmic factor. We also consider the case where d > 1 and the cost functions are convex, adapting a recent online convex optimization algorithm of Zinkevich to the sparser feedback model of the multi-armed bandit problem.",
"Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse than the minimax regret in the corresponding full information setting. We introduce the multi-point bandit setting, in which the player can query each loss function at multiple points. When the player is allowed to query each function at two points, we prove regret bounds that closely resemble bounds for the full information case. This suggests that knowing the value of each loss function at two points is almost as useful as knowing the value of each function everywhere. When the player is allowed to query each function at d+1 points (d being the dimension of the space), we prove regret bounds that are exactly equivalent to full information bounds for smooth functions."
]
}
|
1107.1744
|
2953015416
|
This paper addresses the problem of minimizing a convex, Lipschitz function @math over a convex, compact set @math under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value @math at any query point @math . The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs @math regret. Since any algorithm has regret at least @math on this problem, our algorithm is optimal in terms of the scaling with @math .
|
A related line of work attempts to solve convex optimization problems by instead posing the problem of finding a feasible point from a convex set. Different oracle models of specifying the convex set correspond to different optimization settings. The bandit setting is identical to finding a feasible point given only a membership oracle for the convex set. Since we get only noisy function evaluations, we in fact have access only to a noisy membership oracle. While there are elegant solutions based on random walks in the easier separation oracle model @cite_8 , the membership oracle setting has mostly been studied in the noiseless case, using much more complex techniques that build on the seminal work of Nemirovski and Yudin @cite_10 . These techniques have the additional drawback that they do not guarantee low regret, since the methods often explore aggressively.
|
{
"cite_N": [
"@cite_10",
"@cite_8"
],
"mid": [
"2010189695",
"2106318612"
],
"abstract": [
"In this paper we consider the multiarmed bandit problem where the arms are chosen from a subset of the real line and the mean rewards are assumed to be a continuous function of the arms. The problem with an infinite number of arms is much more difficult than the usual one with a finite number of arms because the built-in learning task is now infinite dimensional. We devise a kernel estimator-based learning scheme for the mean reward as a function of the arms. Using this learning scheme, we construct a class of certainty equivalence control with forcing schemes and derive asymptotic upper bounds on their learning loss. To the best of our knowledge, these bounds are the strongest rates yet available. Moreover, they are stronger than the @math required for optimality with respect to the average-cost-per-unit-time criterion.",
"Minimizing a convex function over a convex set in n-dimensional space is a basic, general problem with many interesting special cases. Here, we present a simple new algorithm for convex optimization based on sampling by a random walk. It extends naturally to minimizing quasi-convex functions and to other generalizations."
]
}
|
1107.1744
|
2953015416
|
This paper addresses the problem of minimizing a convex, Lipschitz function @math over a convex, compact set @math under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value @math at any query point @math . The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs @math regret. Since any algorithm has regret at least @math on this problem, our algorithm is optimal in terms of the scaling with @math .
|
Since @math is convex by assumption, the average @math must satisfy @math (by Jensen's inequality). That is, a method guaranteeing small regret is also an optimization algorithm. The converse, however, is not necessarily true. Suppose an optimization algorithm queries @math points of the domain and then outputs a candidate minimizer @math . Without any assumption on the behavior of the optimization method, nothing can be said about the regret it suffers over @math iterations. In fact, depending on the particular setup, an optimization method might prefer to spend time querying far from the minimum of the function (that is, exploring) and then output the solution at the last step. Guaranteeing a small regret typically involves a more careful balancing of exploration and exploitation. This distinction between arbitrary optimization schemes and anytime methods is discussed further in @cite_6 .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2155192981"
],
"abstract": [
"We study the intrinsic limitations of sequential convex optimization through the lens of feedback information theory. In the oracle model of optimization, an algorithm queries an oracle for noisy information about the unknown objective function and the goal is to (approximately) minimize every function in a given class using as few queries as possible. We show that, in order for a function to be optimized, the algorithm must be able to accumulate enough information about the objective. This, in turn, puts limits on the speed of optimization under specific assumptions on the oracle and the type of feedback. Our techniques are akin to the ones used in statistical literature to obtain minimax lower bounds on the risks of estimation procedures; the notable difference is that, unlike in the case of i.i.d. data, a sequential optimization algorithm can gather observations in a controlled manner, so that the amount of information at each step is allowed to change in time. In particular, we show that optimization algorithms often obey the law of diminishing returns: the signal-to-noise ratio drops as the optimization algorithm approaches the optimum. To underscore the generality of the tools, we use our approach to derive fundamental lower bounds for a certain active learning problem. Overall, the present work connects the intuitive notions of “information” in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information."
]
}
|
1107.1744
|
2953015416
|
This paper addresses the problem of minimizing a convex, Lipschitz function @math over a convex, compact set @math under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value @math at any query point @math . The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs @math regret. Since any algorithm has regret at least @math on this problem, our algorithm is optimal in terms of the scaling with @math .
|
We note that most of the existing approaches to derivative-free optimization outlined in the recent book @cite_3 typically search for a descent or sufficient-descent direction and then take a step in this direction. However, most convergence results are asymptotic and do not provide concrete rates even in an optimization-error setting. The main emphasis is often on global optimization of non-convex functions, while we are mainly interested in convex functions in this work. Nesterov @cite_13 recently analyzes schemes similar to those of @cite_1 with access to function evaluations, showing @math convergence for non-smooth functions and accelerated schemes for smooth mean cost functions. However, when analyzed in a noisy evaluation setting, his rates suffer from the same degradation as those of @cite_1 .
|
{
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_3"
],
"mid": [
"153281708",
"2149479912",
"181733065"
],
"abstract": [
"Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse than the minimax regret in the corresponding full information setting. We introduce the multi-point bandit setting, in which the player can query each loss function at multiple points. When the player is allowed to query each function at two points, we prove regret bounds that closely resemble bounds for the full information case. This suggests that knowing the value of each loss function at two points is almost as useful as knowing the value of each function everywhere. When the player is allowed to query each function at d+1 points (d being the dimension of the space), we prove regret bounds that are exactly equivalent to full information bounds for smooth functions.",
"In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we present also an accelerated scheme with the expected rate of convergence O(n^2/k^2), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence O(n/k^{1/2}). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.",
"The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. Introduction to Derivative-Free Optimization is the first contemporary comprehensive treatment of optimization without derivatives. This book covers most of the relevant classes of algorithms from direct search to model-based approaches. It contains a comprehensive description of the sampling and modeling tools needed for derivative-free optimization; these tools allow the reader to better understand the convergent properties of the algorithms and identify their differences and similarities. Introduction to Derivative-Free Optimization also contains analysis of convergence for modified Nelder Mead and implicit-filtering methods, as well as for model-based methods such as wedge methods and methods based on minimum-norm Frobenius models. Audience: The book is intended for anyone interested in using optimization on problems where derivatives are difficult or impossible to obtain. Such audiences include chemical, mechanical, aeronautical, and electrical engineers, as well as economists, statisticians, operations researchers, management scientists, biological and medical researchers, and computer scientists. It is also appropriate for use in an advanced undergraduate or early graduate-level course on optimization for students having a background in calculus, linear algebra, and numerical analysis. 
Contents: Preface; Chapter 1: Introduction; Part I: Sampling and modeling; Chapter 2: Sampling and linear models; Chapter 3: Interpolating nonlinear models; Chapter 4: Regression nonlinear models; Chapter 5: Underdetermined interpolating models; Chapter 6: Ensuring well poisedness and suitable derivative-free models; Part II: Frameworks and algorithms; Chapter 7: Directional direct-search methods; Chapter 8: Simplicial direct-search methods; Chapter 9: Line-search methods based on simplex derivatives; Chapter 10: Trust-region methods based on derivative-free models; Chapter 11: Trust-region interpolation-based methods; Part III: Review of other topics; Chapter 12: Review of surrogate model management; Chapter 13: Review of constrained and other extensions to derivative-free optimization; Appendix: Software for derivative-free optimization; Bibliography; Index."
]
}
|
1107.1660
|
2950605453
|
Traditionally the probabilistic ranking principle is used to rank the search results while the ranking based on expected profits is used for paid placement of ads. These rankings try to maximize the expected utilities based on the user click models. Recent empirical analysis on search engine logs suggests a unified click model for both ranked ads and search results. The segregated view of document and ad rankings does not consider this commonality. Further, the used models consider parameters of (i) the probability of the user abandoning browsing results and (ii) the perceived relevance of result snippets. But how to account for them in improved ranking is currently unknown. In this paper, we propose a generalized ranking function---namely "Click Efficiency (CE)"---for documents and ads based on empirically proven user click models. The ranking considers parameters (i) and (ii) above, is optimal, and has the same time complexity as sorting. To exploit its generality, we examine the reduced forms of CE ranking under different assumptions, enumerating a hierarchy of ranking functions. Some of the rankings in the hierarchy are currently used ad and document ranking functions, while others suggest new rankings. While optimality of ranking is sufficient for document ranking, applying CE ranking to ad auctions requires an appropriate pricing mechanism. We incorporate a second price based pricing mechanism with the proposed ranking. Our analysis proves several desirable properties including revenue dominance over VCG for the same bid vector and existence of a Nash Equilibrium in pure strategies. The equilibrium is socially optimal, and revenue equivalent to the truthful VCG equilibrium. Further, we relax the independence assumption in CE ranking and analyze the diversity ranking problem. We show that optimal diversity ranking is NP-Hard in general, and that a constant time approximation is unlikely.
|
User behavior studies in click models validate the ranking function introduced. A number of position-based and cascade models have been studied recently @cite_13 @cite_5 @cite_10 @cite_9 @cite_29 . In particular, the General Click Model (GCM) of Zhu @cite_29 is interesting for us, since other click models are special cases of GCM. Zhu @cite_29 have listed the assumptions under which GCM reduces to each of the other click models. We will discuss the relation of our model to GCM below. Optimizing utilities of two-dimensional placement of search results has been studied by Chierichetti @cite_22 .
|
{
"cite_N": [
"@cite_22",
"@cite_10",
"@cite_9",
"@cite_29",
"@cite_5",
"@cite_13"
],
"mid": [
"2021905495",
"2106630408",
"2099213975",
"2092701055",
"1992549066",
"2026784708"
],
"abstract": [
"Classic search engine results are presented as an ordered list of documents and the problem of presentation trivially reduces to ordering documents by their scores. This is because users scan a list presentation from top to bottom. This leads to natural list optimization measures such as the discounted cumulative gain (DCG) and the rank-biased precision (RBP). Increasingly, search engines are using two-dimensional results presentations; image and shopping search results are long-standing examples. The simplistic heuristic used in practice is to place images by row-major order in the matrix presentation. However, a variety of evidence suggests that users' scan of pages is not in this matrix order. In this paper we (1) view users' scan of a results page as a Markov chain, which yields DCG and RBP as special cases for linear lists; (2) formulate, study, and develop solutions for the problem of inferring the Markov chain from click logs; (3) from these inferred Markov chains, empirically validate folklore phenomena (e.g., the \"golden triangle\" of user scans in two dimensions); and (4) develop and experimentally compare algorithms for optimizing user utility in matrix presentations. The theory and algorithms extend naturally beyond matrix presentations.",
"Given a terabyte click log, can we build an efficient and effective click model? It is commonly believed that web search click logs are a gold mine for search business, because they reflect users' preference over web documents presented by the search engine. Click models provide a principled approach to inferring user-perceived relevance of web documents, which can be leveraged in numerous applications in search businesses. Due to the huge volume of click data, scalability is a must. We present the click chain model (CCM), which is based on a solid, Bayesian framework. It is both scalable and incremental, perfectly meeting the computational challenges imposed by the voluminous click logs that constantly grow. We conduct an extensive experimental study on a data set containing 8.8 million query sessions obtained in July 2008 from a commercial search engine. CCM consistently outperforms two state-of-the-art competitors in a number of metrics, with over 9.7% better log-likelihood, over 6.2% better click perplexity and much more robust (up to 30%) prediction of the first and the last clicked position.",
"As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance.",
"Recent advances in click model have positioned it as an attractive method for representing user preferences in web search and online advertising. Yet, most of the existing works focus on training the click model for individual queries, and cannot accurately model the tail queries due to the lack of training data. Simultaneously, most of the existing works consider the query, url and position, neglecting some other important attributes in click log data, such as the local time. Obviously, the click through rate is different between daytime and midnight. In this paper, we propose a novel click model based on Bayesian network, which is capable of modeling the tail queries because it builds the click model on attribute values, with those values being shared across queries. We called our work General Click Model (GCM) as we found that most of the existing works can be special cases of GCM by assigning different parameters. Experimental results on a large-scale commercial advertisement dataset show that GCM can significantly and consistently lead to better results as compared to the state-of-the-art works.",
"Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A 'cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"Search engine click logs provide an invaluable source of relevance information but this information is biased because we ignore which documents from the result list the users have actually seen before and after they clicked. Otherwise, we could estimate document relevance by simple counting. In this paper, we propose a set of assumptions on user browsing behavior that allows the estimation of the probability that a document is seen, thereby providing an unbiased estimate of document relevance. To train, test and compare our model to the best alternatives described in the literature, we gather a large set of real data and proceed to an extensive cross-validation experiment. Our solution very significantly outperforms all previous models. As a side effect, we gain insight into the browsing behavior of users and we can compare it to the conclusions of an eye-tracking experiment by [12]. In particular, our findings confirm that a user almost always sees the document directly after a clicked document. They also explain why documents situated just after a very relevant document are clicked more often."
]
}
|
1107.1660
|
2950605453
|
Traditionally the probabilistic ranking principle is used to rank the search results while the ranking based on expected profits is used for paid placement of ads. These rankings try to maximize the expected utilities based on the user click models. Recent empirical analysis on search engine logs suggests a unified click model for both ranked ads and search results. The segregated view of document and ad rankings does not consider this commonality. Further, the used models consider parameters of (i) the probability of the user abandoning browsing results and (ii) the perceived relevance of result snippets. But how to account for them in improved ranking is currently unknown. In this paper, we propose a generalized ranking function---namely "Click Efficiency (CE)"---for documents and ads based on empirically proven user click models. The ranking considers parameters (i) and (ii) above, is optimal, and has the same time complexity as sorting. To exploit its generality, we examine the reduced forms of CE ranking under different assumptions, enumerating a hierarchy of ranking functions. Some of the rankings in the hierarchy are currently used ad and document ranking functions, while others suggest new rankings. While optimality of ranking is sufficient for document ranking, applying CE ranking to ad auctions requires an appropriate pricing mechanism. We incorporate a second price based pricing mechanism with the proposed ranking. Our analysis proves several desirable properties including revenue dominance over VCG for the same bid vector and existence of a Nash Equilibrium in pure strategies. The equilibrium is socially optimal, and revenue equivalent to the truthful VCG equilibrium. Further, we relax the independence assumption in CE ranking and analyze the diversity ranking problem. We show that optimal diversity ranking is NP-Hard in general, and that a constant time approximation is unlikely.
|
Diversity ranking has received considerable attention recently @cite_15 @cite_21 . Optimizing the objective functions used by prior works to measure diversity is known to be NP-Hard @cite_7 .
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_7"
],
"mid": [
"",
"2107126505",
"2051968805"
],
"abstract": [
"",
"Result diversity is a topic of great importance as more facets of queries are discovered and users expect to find their desired facets in the first page of the results. However, the underlying questions of how 'diversity' interplays with 'quality' and when preference should be given to one or both are not well-understood. In this work, we model the problem as expectation maximization and study the challenges of estimating the model parameters and reaching an equilibrium. One model parameter, for example, is correlations between pages which we estimate using textual contents of pages and click data (when available). We conduct experiments on diversifying randomly selected queries from a query log and the queries chosen from the disambiguation topics of Wikipedia. Our algorithm improves upon Google in terms of the diversity of random queries, retrieving 14% to 38% more aspects of queries in top 5, while maintaining a precision very close to Google. On a more selective set of queries that are expected to benefit from diversification, our algorithm improves upon Google in terms of precision and diversity of the results, and significantly outperforms another baseline system for result diversification.",
"A useful ability for search engines is to be able to rank objects with novelty and diversity: the top k documents retrieved should cover possible intents of a query with some distribution, or should contain a diverse set of subtopics related to the user's information need, or contain nuggets of information with little redundancy. Evaluation measures have been introduced to measure the effectiveness of systems at this task, but these measures have worst-case NP-hard computation time. The primary consequence of this is that there is no ranking principle akin to the Probability Ranking Principle for document relevance that provides uniform instruction on how to rank documents for novelty and diversity. We use simulation to investigate the practical implications of this for optimization and evaluation of retrieval systems."
]
}
|
1107.1404
|
2953315821
|
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is to test for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration, motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both the theoretical and simulation based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task although the minimax rates for pointwise estimation are very slow.
|
Hypothesis testing for deconvolution and related inverse problems is a relatively new area. Current methods cover testing of parametric assumptions (cf. @cite_24 @cite_39 @cite_41 ) and, more recently, testing for certain smoothness classes such as Sobolev balls in a Gaussian sequence model (Laurent @cite_7 @cite_18 and Ingster @cite_32 ). All these papers focus on regression deconvolution models. Exceptions for density deconvolution are Holzmann @cite_42 , Balabdaoui @cite_23 , and Meister @cite_43 , who developed tests for various global hypotheses, such as global monotonicity. The latter test has been derived for one fixed interval and allows one to check whether a density is monotone on that interval at a preassigned level of significance.
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_41",
"@cite_42",
"@cite_32",
"@cite_39",
"@cite_24",
"@cite_43",
"@cite_23"
],
"mid": [
"1963643305",
"2963653515",
"",
"2068523437",
"2096798436",
"",
"2018808115",
"2076148642",
"2007693060"
],
"abstract": [
"In this paper we study statistical inference for certain inverse problems. We go beyond mere estimation purposes and review and develop the construction of confidence intervals and confidence bands in some inverse problems, including deconvolution and the backward heat equation. Further, we discuss the construction of certain hypothesis tests, in particular concerning the number of local maxima of the unknown function. The methods are illustrated in a case study, where we analyze the distribution of heliocentric escape velocities of galaxies in the Centaurus galaxy cluster, and provide statistical evidence for its bimodality.",
"The aim of this paper is to establish non-asymptotic minimax rates of testing for goodness-of-fit hypotheses in a heteroscedastic setting. More precisely, we deal with sequences @math of independent Gaussian random variables, having mean @math and variance @math . The set @math will be either finite or countable. In particular, such a model covers the inverse problem setting where few results in test theory have been obtained. The rates of testing are obtained with respect to @math and @math norms, without assumption on @math and on several functions spaces. Our point of view is completely non-asymptotic.",
"",
"We study non-parametric tests for checking parametric hypotheses about a multivariate density f of independent identically distributed random vectors Z1, Z2,... which are observed under additional noise with density ψ. The tests we propose are an extension of the test due to Bickel and Rosenblatt [On some global measures of the deviations of density function estimates, Ann. Statist. 1 (1973) 1071-1095] and are based on a comparison of a nonparametric deconvolution estimator and the smoothed version of a parametric fit of the density f of the variables of interest Zi. In an example the loss of efficiency is highlighted when the test is based on the convolved (but observable) density g = f * ψ instead on the initial density of interest f.",
"We consider the detection problem of a two-dimensional function from noisy observations of its integrals over lines. We study both rate and sharp asymptotics for the error probabilities in the minimax setup. By construction, the derived tests are non-adaptive. We also construct a minimax rate-optimal adaptive test of rather simple structure.",
"",
"We propose two test statistics for use in inverse regression problems Y = Kt + e, where K is a given linear operator which cannot be continuously inverted. Thus, only noisy, indirect observations Y for the function t are available. Both test statistics have a counterpart in classical hypothesis testing, where they are called the order selection test and the data-driven Neyman smooth test. We also introduce two model selection criteria which extend the classical Akaike information criterion and Bayes information criterion to inverse regression problems. In a simulation study we show that the inverse order selection and Neyman smooth tests outperform their direct counterparts in many cases. The theory is motivated by data arising in confocal fluorescence microscopy. Here, images are observed with blurring, modelled as convolution, and stochastic error at subsequent times. The aim is then to reduce the signal-to-noise ratio by averaging over the distinct images. In this context it is relevant to decide whether the images are still equal, or have changed by outside influences such as moving of the object table. Copyright (c) 2009 Royal Statistical Society.",
"We construct a testing procedure for monotonicity on some interval in the statistical models of density deconvolution and signal deblurring under white noise. The corresponding error probabilities are studied with particular attention to their asymptotic properties.",
"We analyze a new dataset from an electrophysiological recording of transmembrane currents through a bacterial membrane channel to demonstrate the existence of single and multiple channel currents. Protein channels mediate transport through biological membranes; knowledge of the channel properties gained from electrophysiological recordings is important for a targeted drug design. We investigate the bacterial membrane protein SecYEG which is of essential importance for the secretory pathway for sorting of newly synthesized proteins to their place of function in the cell. Our results strongly indicate that in the SecYEG pore the different modes of the density of channel currents are approximately equidistant and correspond to different numbers of open channels in the membrane. A current of ≈12 pA under the present experimental conditions turns out to be characteristic of the presence of a single open SecYEG pore, a fact that had not been electrophysiologically characterized so far. Electrophysiological reco..."
]
}
|
1107.1404
|
2953315821
|
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is to test for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration, motivated by the modulus of continuity of Brownian motion. We investigate the performance of our results from both the theoretical and simulation based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task although the minimax rates for pointwise estimation are very slow.
|
Our work can also be viewed as an extension of Chaudhuri and Marron @cite_45 as well as Dümbgen and Walther @cite_1 , who treated the case @math (with @math in @cite_1 ) in the direct case, i.e. when @math . However, the approach in @cite_45 does not allow for sequences of bandwidths tending to zero and yields limit distributions depending on unknown quantities again. The methods in @cite_1 require a deterministic coupling result. The latter allows one to consider the multiscale approximation for @math only, but it cannot be transferred to the deconvolution setting.
|
{
"cite_N": [
"@cite_45",
"@cite_1"
],
"mid": [
"2132685693",
"1976142330"
],
"abstract": [
"Scale space theory from computer vision leads to an interesting and novel approach to nonparametric curve estimation. The family of smooth curve estimates indexed by the smoothing parameter can be represented as a surface called the scale space surface. The smoothing parameter here plays the same role as that played by the scale of resolution in a visual system. In this paper, we study in detail various features of that surface from a statistical viewpoint. Weak convergence of the empirical scale space surface to its theoretical counterpart and some related asymptotic results have been established under appropriate regularity conditions. Our theoretical analysis provides new insights into nonparametric smoothing procedures and yields useful techniques for statistical exploration of features in the data. In particular, we have used the scale space approach for the development of an effective exploratory data analytic tool called SiZer.",
"We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finite-sample significance levels, is easy to implement and possesses certain asymptotic optimality and adaptivity properties."
]
}
|
1107.0690
|
2172209041
|
The process of design and development of virtual environments can be supported by tools and frameworks, to save time on technical aspects and focus on the content. In this paper we present an academic framework which provides several levels of abstraction to ease this work. It includes state-of-the-art components we devised or integrated, adopting open-source solutions to address specific problems. Its architecture is modular and customizable, and the code is open-source.
|
Literature and academic interest in supporting videogame development is growing. There are some works sharing similarities with ours, e.g., @cite_1 developed a Virtual Reality platform, designing server and client applications, including a rendering engine based on OGRE combined with RakNet for networking features, and finally they developed an example application. Also @cite_5 recently presented a platform for simulating virtual environments where users communicate and interact with each other, using avatars with facial expressions and body motions. Some interesting techniques of computer vision and physics effects have been implemented to increase the realism of the simulation. In a broader panorama, Hu- @cite_0 discussed the design and implementation of a game platform capable of running online games, both from the client and server perspective. Finally, Graham and Roberts @cite_2 analyzed the videogame development process from a qualitative perspective, trying to define quality attributes of 3D games and adopting interesting criteria to help achieve desired standards of quality during the design of academic and commercial products.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_1",
"@cite_2"
],
"mid": [
"2380161446",
"2104795399",
"2155611480",
"1593161808"
],
"abstract": [
"The technical issues of how to construct the game platform and how to realize communication between server and client under network circumstances were discussed. By an actual case study, a solution for data communication between client and server was developed, which was not only compatible with the Windows message driven mechanism, but also solved the concurrent operating problems on the server side. The solution is programmed in the C++ language, and has the advantages of being transportable, reusable and expandable.",
"In this paper we present a platform called VhCVE, in which relevant issues related to Collaborative Virtual Environments applications are integrated. The main goal is to provide a framework where participants can interact with others by voice and chat. Also, manipulation tools such as a mouse using Computer Vision and Physics are included, as well as rendering techniques (e.g. light sources, shadows and weather effects). In addition, avatar animation in terms of face and body motion is provided. Results indicate that our platform can be used as an interactive virtual world to help communication among people.",
"A Virtual Reality system with multi-interaction based on distributed working was designed to solve the application problem of VR; the server end, the render end and the control end of the system were developed using the graphics rendering engine OGRE combined with the network engine RakNet, and an application example is presented.",
"The development of video games is a complex software engineering activity bringing together large multidisciplinary teams under stringent constraints. While much has been written about how to develop video games, there has been as yet little attempt to view video game development from a quality perspective, attempting to enumerate the quality attributes that must be satisfied by game implementations, and to relate implementation techniques to those quality attributes. In this paper, we discuss desired quality attributes of 3D computer games, and we use the development of our own Life is a Village game to illustrate architectural tactics that help achieve these desired qualities."
]
}
|
1107.0414
|
2047057308
|
In this paper we address the problem of understanding the success of algorithms that organize patches according to graph-based metrics. Algorithms that analyze patches extracted from images or time series have led to state-of-the art techniques for classification, denoising, and the study of nonlinear dynamics. The main contribution of this work is to provide a theoretical explanation for the above experimental observations. Our approach relies on a detailed analysis of the commute time metric on prototypical graph models that epitomize the geometry observed in general patch graphs. We prove that a parametrization of the graph based on commute times shrinks the mutual distances between patches that correspond to rapid local changes in the signal, while the distances between patches that correspond to slow local changes expand. In effect, our results explain why the parametrization of the set of patches based on the eigenfunctions of the Laplacian can concentrate patches that correspond to rapid local changes, which would otherwise be shattered in the space of patches. While our results are based on a large sample analysis, numerical experimentations on synthetic and real data indicate that the results hold for datasets that are very small in practice.
|
From a more general perspective, this work presents an investigation into the diffusion process on the graph models presented in Section . Our work is thus related to a large body of work on the analysis of complex and random networks using first-passage times (e.g. @cite_37 and references therein). This area is usually motivated by physical problems such as transport in disordered media, neuron firing, or energy flow on power grids, rather than by applications in signal processing.
|
{
"cite_N": [
"@cite_37"
],
"mid": [
"1995519784"
],
"abstract": [
"How long does it take a random walker to reach a given target point? This quantity, known as a first passage time, is important because of its crucial role in various situations such as spreading of diseases or target search processes. This paper develops a general theory that allows the accurate evaluation of the mean first passage time in complex media. The predictions are confirmed by numerical simulations of several representative models of disordered media, fractals, anomalous diffusion and scale free networks."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Human trajectories are often approximated by random walk models. Measurements suggest that animal but also human trajectories can be approximated by Lévy flights @cite_15 @cite_3 @cite_13 @cite_33 . Mobility patterns of individuals at the geographic scale, as obtained from mobile phones, show that the distribution of displacements over all users is well approximated by a truncated power law @cite_27 . Investigating mobility at the geographic scale, however, does not shed light on the shorter-range scale that is relevant for individual mobility and proximity in contexts that are relevant for data diffusion and its applications.
|
{
"cite_N": [
"@cite_33",
"@cite_3",
"@cite_27",
"@cite_15",
"@cite_13"
],
"mid": [
"2037571815",
"2153248811",
"1982300822",
"2053363673",
"2155528675"
],
"abstract": [
"There is substantial interest in the effect of human mobility patterns on opportunistic communications. Inspired by recent work revisiting some of the early evidence for a Lévy flight foraging strategy in animals, we analyse datasets on human contact from real world traces. By analysing the distribution of inter-contact times on different time scales and using different graphical forms, we find not only the highly skewed distributions of waiting times highlighted in previous studies but also a clear circadian rhythm. The relative visibility of these two components depends strongly on which graphical form is adopted and the range of time scales. We use a simple model to reconstruct the observed behaviour and discuss the implications of this for forwarding efficiency.",
"We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance.",
"This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.",
"Newtonian physics began with an attempt to make precise predictions about natural phenomena, predictions that could be accurately checked by observation and experiment. The goal was to understand nature as a deterministic, “clockwork” universe. The application of probability distributions to physics developed much more slowly. Early uses of probability arguments focused on distributions with well‐defined means and variances. The prime example was the Gaussian law of errors, in which the mean traditionally represented the most probable value from a series of repeated measurements of a fixed quantity, and the variance was related to the uncertainty of those measurements.",
"The routing performance of delay tolerant networks (DTN) is highly correlated with the distribution of inter-contact times (ICT), the time period between two successive contacts of the same two mobile nodes. As humans are often carriers of mobile communication devices, studying the patterns of human mobility is an essential tool to understand the performance of DTN protocols. From measurement studies of human contact behaviors, we find that their distributions closely resemble a form of power-law distributions called truncated Pareto. Human walk traces have a dichotomy distribution pattern of ICT; it has a power-law tendency up to some time, and decays exponentially after that time. Truncated Pareto distributions offer a simple yet cohesive mathematical model to express this dichotomy in the measured data. Using the residual and relaxation time theory [17] [4], we apply truncated Pareto distributions to quantify the performance of opportunistic routing in DTN. We further show that the Truncated Levy walk (TLW) mobility model [22], commonly used in biology to describe the foraging patterns of animals [25], provides the same truncated power-law ICT distributions as observed from the empirical data, especially when mobility is confined within a finite area. This result confirms our recent finding that human walks contain similar statistical characteristics as Levy walks [22]."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Yoneki @cite_16 points out the importance of collecting real world data when modeling contact networks. Most available data sets that cover the short-range scale use Bluetooth or WiFi technologies to measure device proximity @cite_16 . @cite_24 extract mobility models from user traces, focusing on node localization and path tracing: the analyzed characteristics are node speeds and pause times that follow a log-normal distribution. @cite_5 present an experiment that involved about 40 participants at the Infocom 2005 conference, and report power-law distributions for the time intervals between node contacts.
|
{
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_16"
],
"mid": [
"2137688035",
"2112701160",
"2153882980"
],
"abstract": [
"Understanding user mobility is critical for simulations of mobile devices in a wireless network, but current mobility models often do not reflect real user movements. In this paper, we provide a foundation for such work by exploring mobility characteristics in traces of mobile users. We present a method to estimate the physical location of users from a large trace of mobile devices associating with access points in a wireless network. Using this method, we extracted tracks of always-on Wi-Fi devices from a 13-month trace. We discovered that the speed and pause time each follow a log-normal distribution and that the direction of movements closely reflects the direction of roads and walkways. Based on the extracted mobility characteristics, we developed a mobility model, focusing on movements among popular regions. Our validation shows that synthetic tracks match real tracks with a median relative error of 17%.",
"Pocket Switched Networks (PSN) make use of both human mobility and local global connectivity in order to transfer data between mobile users' devices. This falls under the Delay Tolerant Networking (DTN) space, focusing on the use of opportunistic networking. One key problem in PSN is in designing forwarding algorithms which cope with human mobility patterns. We present an experiment measuring forty-one humans' mobility at the Infocom 2005 conference. The results of this experiment are similar to our previous experiments in corporate and academic working environments, in exhibiting a power-law distribution for the time between node contacts. We then discuss the implications of these results on the design of forwarding algorithms for PSN.",
"The recently developed small wireless devices ranging from sensor boards to mobile phones provide a timely opportunity to gather unique data sets on complex human interactions, which in turn will support rich and meaningful modelling of the underlying networks. We are convinced that this approach will be fruitful and effective for tackling various issues such as infectious disease spread and beyond. Human social dynamics are far more complex than the current simplified models in network theory, and data driven modelling with large-scale experimental results is essential for understanding and building systems that exploit real networks. An important issue to be addressed when taking an empirical and heuristic approach is understanding the characteristics of data such as data collection methods, limitations, scale of noise, and so forth. This aspect has many times been neglected during inference of data. We express the importance of the data driven approach in this paper."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
A wide range of commonly used models can be found in the survey of mobility models for ad-hoc networks by @cite_21 . The authors emphasize the need to devise accurate mobility models, and explore the limitations of current modeling strategies, pointing out that models with no memory (Random Walk and Random Waypoint) describe nodes whose actions are independent from one another. On the other hand, group mobility models, such as the Nomadic Community Mobility Model, aim at representing the behavior of nodes as they move together. @cite_3 model human contact networks using a generative model of human walk patterns based on Lévy flights, and reproduce the fat-tailed distribution of inter-contact times observed in empirical data of human mobility. This model is later used to characterize the routing performance in human-driven DTNs @cite_13 , predicting the message delivery ratio. No analysis is reported that takes into account the role of causality in the process of message diffusion based on these models.
|
{
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_3"
],
"mid": [
"2148135143",
"2155528675",
"2153248811"
],
"abstract": [
"In the performance evaluation of a protocol for an ad hoc network, the protocol should be tested under realistic conditions including, but not limited to, a sensible transmission range, limited buffer space for the storage of messages, representative data traffic models, and realistic movements of the mobile users (i.e., a mobility model). This paper is a survey of mobility models that are used in the simulations of ad hoc networks. We describe several mobility models that represent mobile nodes whose movements are independent of each other (i.e., entity mobility models) and several mobility models that represent mobile nodes whose movements are dependent on each other (i.e., group mobility models). The goal of this paper is to present a number of mobility models in order to offer researchers more informed choices when they are deciding upon a mobility model to use in their performance evaluations. Lastly, we present simulation results that illustrate the importance of choosing a mobility model in the simulation of an ad hoc network protocol. Specifically, we illustrate how the performance results of an ad hoc network protocol drastically change as a result of changing the mobility model simulated.",
"The routing performance of delay tolerant networks (DTN) is highly correlated with the distribution of inter-contact times (ICT), the time period between two successive contacts of the same two mobile nodes. As humans are often carriers of mobile communication devices, studying the patterns of human mobility is an essential tool to understand the performance of DTN protocols. From measurement studies of human contact behaviors, we find that their distributions closely resemble a form of power-law distributions called truncated Pareto. Human walk traces have a dichotomy distribution pattern of ICT; it has a power-law tendency up to some time, and decays exponentially after that time. Truncated Pareto distributions offer a simple yet cohesive mathematical model to express this dichotomy in the measured data. Using the residual and relaxation time theory [17] [4], we apply truncated Pareto distributions to quantify the performance of opportunistic routing in DTN. We further show that the Truncated Levy walk (TLW) mobility model [22], commonly used in biology to describe the foraging patterns of animals [25], provides the same truncated power-law ICT distributions as observed from the empirical data, especially when mobility is confined within a finite area. This result confirms our recent finding that human walks contain similar statistical characteristics as Levy walks [22].",
"We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Most analytical frameworks for message diffusion, such as Ref. @cite_25 are stochastic models used to compute message delay distributions based on parameters describing communication range and inter-contact time distributions, with no special characterization of the causal structure of message propagation. Other works such as Refs. @cite_17 and @cite_9 also focus on the analysis of the distributions of inter-meeting intervals.
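The delay models above can be illustrated with a small Monte Carlo sketch comparing direct transmission against a two-hop multicopy relay under exponentially distributed pairwise inter-contact times. The rate (lam = 0.1) and relay count are arbitrary illustrative values, not parameters taken from the cited models:

```python
import random

def direct_delay(lam):
    # source waits to meet the destination directly
    return random.expovariate(lam)

def two_hop_delay(lam, n_relays):
    # the source hands a copy to each relay at their first meeting;
    # delivery occurs as soon as the source itself, or any relay that
    # received a copy, meets the destination
    best = random.expovariate(lam)                 # source -> destination
    for _ in range(n_relays):
        t_copy = random.expovariate(lam)           # source meets relay
        t_deliver = t_copy + random.expovariate(lam)  # relay meets destination
        best = min(best, t_deliver)
    return best

random.seed(1)
trials = 20000
d = sum(direct_delay(0.1) for _ in range(trials)) / trials
h = sum(two_hop_delay(0.1, 5) for _ in range(trials)) / trials
print(d, h)  # the two-hop relay mean delay falls below the direct-contact mean
```

The per-sample minimum over the source-destination contact and all relay paths is what the closed-form transforms in the stochastic model characterize analytically.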
|
{
"cite_N": [
"@cite_9",
"@cite_25",
"@cite_17"
],
"mid": [
"2153112806",
"2105273407",
"2135365254"
],
"abstract": [
"Numerous routing protocols have been proposed for Delay Tolerant Networking. One class of routing protocols aims at optimizing the delivery performance by using knowledge of previous encounters for forecasting the future contacts to determine suitable next hops for a given packet. Protocols pursuing such an approach face a fundamental challenge of choosing the right protocol parameters and the right time scale for estimation. These, in turn, depend on the mobility characteristics of the mobile nodes which are likely to vary within one scenario and across different ones. We characterise this issue, which has been overlooked in this field so far, using PROPHET and MaxPROP as two representative routing protocols and derive mechanisms to dynamically and independently determine routing parameters in mobile nodes.",
"A stochastic model is introduced that accurately models the message delay in mobile ad hoc networks where nodes relay messages and the networks are sparsely populated. The model has only two input parameters: the number of nodes and the parameter of an exponential distribution which describes the time until two random mobiles come within communication range of one another. Closed-form expressions are obtained for the Laplace-Stieltjes transform of the message delay, defined as the time needed to transfer a message between a source and a destination. From this we derive both a closed-form expression and an asymptotic approximation (as a function of the number of nodes) of the expected message delay. As an additional result, the probability distribution function is obtained for the number of copies of the message at the time the message is delivered. These calculations are carried out for two protocols: the two-hop multicopy and the unrestricted multicopy protocols. It is shown that despite its simplicity, the model accurately predicts the message delay for both relay strategies for a number of mobility models (the random waypoint, random direction and the random walker mobility models).",
"Inter-meeting time between mobile nodes is one of the key metrics in a Mobile Ad-hoc Network (MANET) and central to the end-to-end delay and forwarding algorithms. It is typically assumed to be exponentially distributed in many performance studies of MANET or numerically shown to be exponentially distributed under most existing mobility models in the literature. However, recent empirical results show otherwise: the inter-meeting time distribution in fact follows a power-law. This outright discrepancy potentially undermines our understanding of the performance tradeoffs in MANET obtained under the exponential distribution ofthe inter-meeting time, and thus calls for further study on the power-law inter-meeting time including its fundamental cause, mobility modeling, and its effect. In this paper, we rigorously prove that a finite domain, on which most of the current mobility models are defined, plays an important role in creating the exponential tail of the inter-meeting time. We also prove that by simply removing the boundary in a simple two-dimensional isotropic random walk model, we are able to obtain the empirically observed power-law decay of the inter-meeting time. We then discuss the relationship between the size of the boundary and the relevant time scale of the network scenario under consideration. Our results thus provide guidelines on the design of new mobility models with power-law inter-meeting time distribution, new protocols including packet forwarding algorithms, as well as their performance analysis."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Another relevant area deals with modeling data dissemination in opportunistic networks. In Ref. @cite_2 data is proactively disseminated using a strategy based on the utility of the data itself. Utility is defined on top of existing social relationships between users, and the resulting Markovian model is validated in simulation only.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2166093893"
],
"abstract": [
"In opportunistic networks data dissemination is an important, although not widely explored, topic. Since opportunistic networks topologies are very challenged and unstable, data-centric approaches are an interesting direction to pursue. Data should be proactively and cooperatively disseminated from sources towards possibly interested receivers, as sources and receivers might not be aware of each other, and never get in touch directly. In this paper we consider a utility-based cooperative data dissemination system in which the utility of data is defined based on the social relationships between users. Specifically, we study the performance of this system through an analytical model. Our model allows us to completely characterise the data dissemination process, as it describes both its stationary and transient regimes. After validating the model, we study the system's behaviour with respect to key parameters such as the definition of the data utility function, the initial data allocation on nodes, the number of users in the system, and the data popularity."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Related work focuses on mobile content distribution @cite_0 and delay-tolerant networks @cite_13 . In Ref. @cite_19 the authors point out that validating mobility models is challenging because of the lack of experimental data, and suggest analyzing encounters between individuals rather than their full mobility traces. Ref. @cite_22 reports interesting insights into the influence of contact dynamics on routing strategies in delay-tolerant networks.
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_13",
"@cite_22"
],
"mid": [
"2011750465",
"2152872961",
"2155528675",
"1991690766"
],
"abstract": [
"In this poster, we introduce our Bluespots project, in which a small computer on a bus serves as a Bluetooth Content Distribution (BCD) station in a university public transit scenario.",
"The popularity of handheld devices has created a flurry of research activity into new protocols and applications that can handle and exploit the defining characteristic of this new environment - user mobility. In addition to mobility, another defining characteristic of mobile systems is user social interaction. This paper investigates how mobile systems could exploit people's social interactions to improve these systems' performance and query hit rate. For this, we build a trace-driven simulator that enables us to re-create the behavior of mobile systems in a social environment. We use our simulator to study three diverse mobile systems: DTN routing protocols, firewalls preventing a worm infection, and a mobile P2P file-sharing system. In each of these three cases, we find that mobile systems can benefit substantially from exploiting social information.",
"The routing performance of delay tolerant networks (DTN) is highly correlated with the distribution of inter-contact times (ICT), the time period between two successive contacts of the same two mobile nodes. As humans are often carriers of mobile communication devices, studying the patterns of human mobility is an essential tool to understand the performance of DTN protocols. From measurement studies of human contact behaviors, we find that their distributions closely resemble a form of power-law distributions called truncated Pareto. Human walk traces has a dichotomy distribution pattern of ICT; it has a power-law tendency up to some time, and decays exponentially after that time. Truncated Pareto distributions offer a simple yet cohesive mathematical model to express this dichotomy in the measured data. Using the residual and relaxation time theory [17] [4], we apply truncated Pareto distributions to quantify the performance of opportunistic routing in DTN. We further show that Truncated Levy walk (TLW) mobility model [22] commonly used in biology to describe the foraging patterns of animals [25], provide the same truncated power-law ICT distributions as observed from the empirical data, especially when mobility is confined within a finite area. This result confirms our recent finding that human walks contain similar statistical characteristics as Levy walks [22].",
"In this paper we focus on how the heterogeneous contact dynamics of mobile nodes impact the performance of forwarding/routing algorithms in delay/disruption-tolerant networks (DTNs). To this end, we consider two representative heterogeneous network models, each of which captures heterogeneity among node pairs (individual) and heterogeneity in underlying environment (spatial), respectively, and examine the full extent of difference in delay performances they cause on forwarding/routing algorithms through formal stochastic comparisons. We first show that these heterogeneous models correctly capture non-Poisson contact dynamics observed in real traces. Then, we consider direct forwarding and multicopy two-hop relay protocol and rigorously establish stochastic convex ordering relationships on their delay performances under these heterogeneous models and the corresponding homogeneous model, all of which have the same average inter-contact time over all node pairs. We show that heterogeneous models predict an entirely opposite ordering relationship in the delay performances depending on which of the two heterogeneities is captured. This suggests that merely capturing non-Poisson contact dynamics - even if the entire distribution of aggregated inter-contact time is precisely matched, is not enough and that one should carefully evaluate the performance of forwarding/routing algorithms under a properly chosen heterogeneous network setting. Our results will also be useful in correctly exploiting the underlying heterogeneity structure so as to achieve better performance in DTNs."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
A routing approach that is particularly relevant to our work is the Epidemic Routing approach @cite_10 commonly used to model forwarding and routing protocols in ad-hoc networks. It provides message delivery in disconnected environments where few assumptions are made about node mobility or future network topology. Analogies with susceptible-infected models of infection diffusion are straightforward: the "infectious" agent is a data packet, and nodes "infect" their neighbors by transmitting the data packet to them. This method is commonly proposed for highly mobile contexts in which a path from source to destination may not exist at all times. It is, however, demanding in terms of resources, as the network is essentially flooded. Many epidemic routing strategies have been proposed and evaluated in the literature @cite_31 @cite_12 @cite_8 @cite_6 .
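The susceptible-infected analogy can be sketched as a minimal flooding simulation over a time-ordered contact trace (the node IDs and the four-event trace below are a made-up toy example):

```python
def epidemic_spread(contacts, source):
    """Flood a message over a time-ordered list of contact events
    (t, a, b): whoever already holds a copy "infects" every node it meets."""
    infected = {source: 0}  # node -> time it first received the message
    for t, a, b in contacts:
        if a in infected and b not in infected:
            infected[b] = t
        elif b in infected and a not in infected:
            infected[a] = t
    return infected

# toy contact trace: (time, node_a, node_b)
trace = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3)]
print(epidemic_spread(trace, source=0))  # → {0: 0, 1: 1, 2: 2, 3: 3}
```

Node 3 is reached at t=3 via the relay chain 0→1→2→3 before ever meeting the source directly, which is exactly the behavior that makes flooding deliver messages in disconnected settings, at the cost of a copy at every node.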
|
{
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"2139352451",
"2408452547",
"2109528718",
"1572481965",
"2129849999"
],
"abstract": [
"Epidemic routing has been proposed to reduce the data transmission delay in disruption tolerant wireless networks, in which data can be replicated along multiple opportunistic paths as different nodes move within each other's communication range. With the advent of network coding, it is intuitive that data can not only be replicated, but also coded, when the transmission opportunity arises. However, will opportunistic communication with network coding perform any better than simple replications? In this paper, we present a stochastic analytical framework to study the performance of epidemic routing using network coding in opportunistic networks, as compared to the use of replication. We analytically show that network coding is superior when bandwidth and node buffers are limited, reflecting more realistic scenarios. Our analytical study is able to provide further insights towards future designs of efficient data communication protocols using network coding. As an example, we propose a priority based coding protocol, with which the destination can decode a high priority subset of the data much earlier than it can decode any data without the use of priorities. The correctness of our analytical results has also been confirmed by our extensive simulations.",
"The emergence of Delay Tolerant Networks (DTNs) has culminated in a new generation of wireless networking. New communication paradigms, which use dynamic interconnectedness as people encounter each other opportunistically, lead towards a world where digital traffic flows more easily. We focus on humanto- human communication in environments that exhibit the characteristics of social networks. This paper describes our study of information flow during epidemic spread in such dynamic human networks, a topic which shares many issues with network-based epidemiology. We explore hub nodes extracted from real world connectivity traces and show their influence on the epidemic to demonstrate the characteristics of information propagation.",
"In this paper, we develop a rigorous, unified framework based on ordinary differential equations (ODEs) to study epidemic routing and its variations. These ODEs can be derived as limits of Markovian models under a natural scaling as the number of nodes increases. While an analytical study of Markovian models is quite complex and numerical solution impractical for large networks, the corresponding ODE models yield closed-form expressions for several performance metrics of interest, and a numerical solution complexity that does not increase with the number of nodes. Using this ODE approach, we investigate how resources such as buffer space and the number of copies made for a packet can be traded for faster delivery, illustrating the differences among various forwarding and recovery schemes considered. We perform model validations through simulation studies. Finally we consider the effect of buffer management by complementing the forwarding models with Markovian and fluid buffer models.",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family routing schemes that \"spray\" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Some effort has also been devoted to characterizing forwarding paths. @cite_4 state that mobility networks are in general characterized by a small diameter, i.e., a device can be reached using a small number of relays. This is shown analytically for random graphs, and empirically based on data from conference deployments. Based on this observation, the authors introduce an efficient algorithm to compute the delay-optimal path between nodes that exploits the small-world character of the underlying mobility network. @cite_28 investigate message forwarding in conference settings and characterize optimal paths in time and space. They find that these paths, while optimal, may take a very long time to reach the destination (thousands of seconds), and report a so-called "path explosion" phenomenon: shortly after the optimal path reaches the destination, a large number of nearly-optimal paths do the same.
|
{
"cite_N": [
"@cite_28",
"@cite_4"
],
"mid": [
"2148899275",
"2164988386"
],
"abstract": [
"Forwarding in Delay Tolerant Networks (DTNs) is a challenging problem. We focus on the specific issue of forwarding in an environment where mobile devices are carried by people in a restricted physical space (a conference) and contact patterns are not predictable. We show for the first time a path explosion phenomenon between most pairs of nodes. This means that, once the first path reaches the destination, the number of subsequent paths grows rapidly with time, so there usually exist many near-optimal paths. We study the path explosion phenomenon both analytically and empirically. Our results highlight the importance of unequal contact rates across nodes for understanding the performance of forwarding algorithms. We also find that a variety of well-known forwarding algorithms show surprisingly similar performance in our setting and we interpret this fact in light of the path explosion phenomenon.",
"Portable devices have more data storage and increasing communication capabilities everyday. In addition to classic infrastructure based communication, these devices can exploit human mobility and opportunistic contacts to communicate. We analyze the characteristics of such opportunistic forwarding paths. We establish that opportunistic mobile networks in general are characterized by a small diameter, a destination device is reachable using only a small number of relays under tight delay constraint. This property is first demonstrated analytically on a family of mobile networks which follow a random graph process. We then establish a similar result empirically with four data sets capturing human mobility, using a new methodology to efficiently compute all the paths that impact the diameter of an opportunistic mobile networks. We complete our analysis of network diameter by studying the impact of intensity of contact rate and contact duration. This work is, to our knowledge, the first validation that the so called \"small world\" phenomenon applies very generally to opportunistic networking between mobile nodes."
]
}
|
1106.5992
|
2140472709
|
We report on a data-driven investigation aimed at understanding the dynamics of message spreading in a real-world dynamical network of human proximity. We use data collected by means of a proximity-sensing network of wearable sensors that we deployed at three different social gatherings, simultaneously involving several hundred individuals. We simulate a message spreading process over the recorded proximity network, focusing on both the topological and the temporal properties. We show that by using an appropriate technique to deal with the temporal heterogeneity of proximity events, a universal statistical pattern emerges for the delivery times of messages, robust across all the data sets. Our results are useful to set constraints for generic processes of data dissemination, as well as to validate established models of human mobility and proximity that are frequently used to simulate realistic behaviors.
|
Since portable devices carried by humans are becoming ubiquitous, several solutions have been proposed that exploit the interplay between the structural properties of social networks, mobility aspects, and data diffusion. Daly and Haahr @cite_20 propose an algorithm (SimBet) that uses social network properties such as betweenness centrality and social similarity to inform the routing strategy. Simulations based on real traces show a performance comparable to Epidemic Routing, without the associated overhead, and without a complete knowledge of the network topology. @cite_23 aim at using social structures to better understand human mobility and inform forwarding algorithms. Based on real-world traces, the authors observe high heterogeneity in human interactions both at the level of individuals and of communities. The socially-aware forwarding scheme they devise (BUBBLE Rap) exploits such heterogeneity by targeting nodes with high centrality as well as members of the communities, yielding delivery ratios similar to flooding approaches, with lower resource utilization. @cite_29 propose a middleware (MobiClique) that exploits ad-hoc social interactions to disseminate information using a store-carry-forward mechanism. Data collected from the deployment of the MobiClique system at two conference gatherings demonstrates its ability to create and maintain ad-hoc social networks and communities based on physical proximity.
|
{
"cite_N": [
"@cite_29",
"@cite_23",
"@cite_20"
],
"mid": [
"2911711906",
"2135712710",
"2082674813"
],
"abstract": [
"It is our great pleasure to welcome you to the 2nd ACM SIGCOMM Workshop on Online Social Networks -- WOSN'09. This year's workshop follows a successful start in 2008 by including 11 papers selected from 30 submissions by a wide ranging PC (with assistence from additional reviewers). We sought to expand the scope of last year's event to include a wider definition of the \"social\" part of social networks, as well as a deeper set of empirical work as the area becomes more mature. Last year's themes were money, applications, trust and mobility. This year's sessions are about privacy, structure, evolution and middleware, reflecting many of the users' and developers concerns as wider deployment of OSNs occurs.",
"In this paper we seek to improve our understanding of human mobility in terms of social structures, and to use these structures in the design of forwarding algorithms for Pocket Switched Networks (PSNs). Taking human mobility traces from the real world, we discover that human interaction is heterogeneous both in terms of hubs (popular individuals) and groups or communities. We propose a social based forwarding algorithm, BUBBLE, which is shown empirically to improve the forwarding efficiency significantly compared to oblivious forwarding schemes and to PROPHET algorithm. We also show how this algorithm can be implemented in a distributed way, which demonstrates that it is applicable in the decentralised environment of PSNs.",
"Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity."
]
}
|
1106.5908
|
1992303919
|
We evaluate optimized parallel sparse matrix-vector operations for several representative application areas on widespread multicore-based cluster configurations. First the single-socket baseline performance is analyzed and modeled with respect to basic architectural properties of standard multicore chips. Beyond the single node, the performance of parallel sparse matrix-vector operations is often limited by communication overhead. Starting from the observation that nonblocking MPI is not able to hide communication cost using standard MPI implementations, we demonstrate that explicit overlap of communication and computation can be achieved by using a dedicated communication thread, which may run on a virtual core. Moreover we identify performance benefits of hybrid MPI OpenMP programming due to improved load balancing even without explicit communication overlap. We compare performance results for pure MPI, the widely used "vector-like" hybrid programming strategies, and explicit overlap on a modern multicore-based cluster and a Cray XE6 system.
|
In recent years the performance of various spMVM algorithms has been evaluated by several groups @cite_2 @cite_13 @cite_18 . Covering different matrix storage formats and implementations on various types of hardware, they have reviewed collections of publicly available matrices of varying size and reported on the obtained performance. Scalable parallel spMVM implementations have also been proposed @cite_1 @cite_12 , mostly based on an MPI-only strategy. Hybrid parallel spMVM approaches had already been devised before the emergence of multicore processors @cite_19 @cite_14 . Recently, a "vector mode" approach could not compete with a scalable MPI implementation for a specific problem on a Cray system @cite_1 . There is no up-to-date literature that systematically investigates novel features like multicore, ccNUMA node structure, and simultaneous multithreading (SMT) for hybrid parallel spMVM.
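For reference, the numerical kernel under discussion, sparse matrix-vector multiplication in CRS/CSR storage, boils down to the following loop (a plain serial sketch with a hand-built toy matrix; the hybrid-parallel variants discussed above parallelize the outer row loop across threads and add MPI halo exchange of vector elements):

```python
def csr_spmv(indptr, indices, data, x):
    # y = A @ x for a matrix A stored in CSR/CRS format:
    # row i holds nonzeros data[indptr[i]:indptr[i+1]]
    # in columns indices[indptr[i]:indptr[i+1]]
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y[row] = s
    return y

# toy matrix A = [[2, 0, 1],
#                 [0, 3, 0],
#                 [4, 0, 5]]
indptr  = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```

The irregular, indirect access to x via indices is why the kernel is memory-bandwidth bound, which is the architectural property the single-socket performance models cited above are built around.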
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2522590958",
"2092013581",
"2049585661",
"1975116854",
"2103877122",
"2062240003"
],
"abstract": [
"",
"Eigenvalue problems involving very large sparse matrices are common to various fields in science. In general, the numerical core of iterative eigenvalue algorithms is a matrix-vector multiplication (MVM) involving the large sparse matrix. We present three different programming approaches for parallel MVM on present day supercomputers. In addition to a pure message-passing approach, two hybrid parallel implementations are introduced based on simultaneous use of message-passing and shared-memory programming models. For a modern SMP cluster (HITACHI SR8000) performance and scalability of the hybrid implementations are discussed and compared with the pure message-passing approach on massively-parallel systems (CRAY T3E), vector computers (NEC SX5e) and distributed shared-memory systems (SGI Origin3800).",
"We present a massively parallel implementation of symmetric sparse matrix-vector product for modern clusters with scalar multi-core CPUs. Matrices with highly variable structure and density arising from unstructured three-dimensional FEM discretizations of mechanical and diffusion problems are studied. A metric of the effective memory bandwidth is introduced to analyze the impact on performance of a set of simple, well-known optimizations: matrix reordering, manual prefetching, and blocking. A modification to the CRS storage improving the performance on multi-core Opterons is shown. The performance of an entire SMP blade rather than the per-core performance is optimized. Even for the simplest 4 node mechanical element our code utilizes close to 100% of the per-blade available memory bandwidth. We show that reducing the storage requirements for symmetric matrices results in roughly two times speedup. Blocking brings further storage savings and a proportional performance increase. Our results are compared to existing state-of-the-art implementations of SpMV, and to the dense BLAS2 performance. Parallel efficiency on 5400 Opteron cores of the Cray XT4 cluster is around 80-90% for problems with approximately 25^3 mesh nodes per core. For a problem with 820 million degrees of freedom the code runs with a sustained performance of 5.2 TeraFLOPs, over 20% of the theoretical peak.",
"Most HPC systems are clusters of shared memory nodes. Parallel programming must combine the distributed memory parallelization on the node interconnect with the shared memory parallelization inside each node. The hybrid MPI+OpenMP programming model is compared with pure MPI, compiler based parallelization, and other parallel programming models on hybrid architectures. The paper focuses on bandwidth and latency aspects, and also on whether programming paradigms can separate the optimization of communication and computation. Benchmark results are presented for hybrid and pure MPI communication. This paper analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes.",
"In this paper, we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided, and thus unsuccessful attempts for optimization. In order to gain an insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. In addition, we investigate the parallel version of the kernel and report on the corresponding performance results and their relation to each architecture's specific multithreaded configuration. Based on our experiments, we extract useful conclusions that can serve as guidelines for the optimization process of both single and multithreaded versions of the kernel.",
"We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"The sparse matrix–vector product is an important computational kernel that runs ineffectively on many computers with super-scalar RISC processors. In this paper we analyse the performance of the sparse matrix–vector product with symmetric matrices originating from the FEM and describe techniques that lead to a fast implementation. It is shown how these optimisations can be incorporated into an efficient parallel implementation using message-passing. We conduct numerical experiments on many different machines and show that our optimisations speed up the sparse matrix–vector multiplication substantially."
]
}
|
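The CRS/CSR storage format at the heart of the spMVM kernels above can be sketched in a few lines of Python. This is an illustrative toy (the function name and example matrix are invented here, not taken from any of the cited implementations); the outer row loop is the one a hybrid MPI/OpenMP kernel would distribute across cores:

```python
def spmv_csr(val, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A in compressed row storage (CRS).

    val     -- nonzero values, stored row by row
    col_idx -- column index of each nonzero
    row_ptr -- row i's nonzeros occupy val[row_ptr[i]:row_ptr[i+1]]
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):  # the loop a parallel kernel splits across threads
        s = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            s += val[j] * x[col_idx[j]]
        y[i] = s
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
val = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
x = [1.0, 1.0, 1.0]
print(spmv_csr(val, col_idx, row_ptr, x))  # [3.0, 3.0, 9.0]
```

In the distributed setting discussed above, each MPI rank would hold a block of rows; the entries of x owned by remote ranks are what a dedicated communication thread could fetch while the purely local rows are being processed.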
1106.6197
|
2950436751
|
We characterize the rank of edge connection matrices of partition functions of real vertex models, as the dimension of the homogeneous components of the algebra of @math -invariant tensors. Here @math is the subgroup of the real orthogonal group that stabilizes the vertex model. This answers a question of Balázs Szegedy from 2007.
|
In @cite_5 , Lovász characterized the rank of vertex connection matrices of partition functions of real-valued weighted spin models. In order to state his result, we first need to introduce some terminology; it is not used anywhere else in this paper.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1975568166"
],
"abstract": [
"Connection matrices were introduced in [M. Freedman, L. Lovasz, A. Schrijver, Reflection positivity, rank connectivity, and homomorphism of graphs (MSR Tech Report # MSR-TR-2004-41) ftp://ftp.research.microsoft.com/pub/tr/TR-2004-41.pdf], where they were used to characterize graph homomorphism functions. The goal of this note is to determine the exact rank of these matrices. The result can be rephrased in terms of the dimension of graph algebras, also introduced in the same paper. Yet another version proves that if two k-tuples of nodes behave in the same way from the point of view of graph homomorphisms, then they are equivalent under the automorphism group."
]
}
|
1106.6197
|
2950436751
|
We characterize the rank of edge connection matrices of partition functions of real vertex models, as the dimension of the homogeneous components of the algebra of @math -invariant tensors. Here @math is the subgroup of the real orthogonal group that stabilizes the vertex model. This answers a question of Balázs Szegedy from 2007.
|
Let @math be strictly positive and let @math be symmetric. Following de la Harpe and Jones @cite_6 we call the pair @math a . Let @math be the collection of all graphs, allowing multiple edges but no loops or circles. The of @math is the graph parameter @math defined by for @math .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2013439277"
],
"abstract": [
"Spin models and vertex models on graphs are defined as appropriate generalizations of the Ising-Potts model of statistical mechanics. We review some of these state models and the graph functions defined by them. If a graph X represents a knot or a link L in R^3, we describe models M for which the value Z_M(X) at X of the graph function defined by M depends only on L and not on X."
]
}
|
1106.6197
|
2950436751
|
We characterize the rank of edge connection matrices of partition functions of real vertex models, as the dimension of the homogeneous components of the algebra of @math -invariant tensors. Here @math is the subgroup of the real orthogonal group that stabilizes the vertex model. This answers a question of Balázs Szegedy from 2007.
|
The vertex connection matrices were used by Freedman, Lovász and Schrijver in @cite_4 to characterize graph parameters which are of the form @math for some positive @math and symmetric @math .
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2141382791"
],
"abstract": [
"It is shown that a graph parameter can be realized as the number of homomorphisms into a fixed (weighted) graph if and only if it satisfies two linear algebraic conditions: reflection positivity and exponential rank-connectivity. In terms of statistical physics, this can be viewed as a characterization of partition functions of vertex models."
]
}
|
1106.6197
|
2950436751
|
We characterize the rank of edge connection matrices of partition functions of real vertex models, as the dimension of the homogeneous components of the algebra of @math -invariant tensors. Here @math is the subgroup of the real orthogonal group that stabilizes the vertex model. This answers a question of Balázs Szegedy from 2007.
|
Let @math be positive and @math be symmetric. Lovász @cite_5 characterized the ranks of the vertex connection matrices of the parameter @math . Let @math . Two distinct vertices @math are called if @math for all @math .
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1975568166"
],
"abstract": [
"Connection matrices were introduced in [M. Freedman, L. Lovasz, A. Schrijver, Reflection positivity, rank connectivity, and homomorphism of graphs (MSR Tech Report # MSR-TR-2004-41) ftp://ftp.research.microsoft.com/pub/tr/TR-2004-41.pdf], where they were used to characterize graph homomorphism functions. The goal of this note is to determine the exact rank of these matrices. The result can be rephrased in terms of the dimension of graph algebras, also introduced in the same paper. Yet another version proves that if two k-tuples of nodes behave in the same way from the point of view of graph homomorphisms, then they are equivalent under the automorphism group."
]
}
|
1106.6197
|
2950436751
|
We characterize the rank of edge connection matrices of partition functions of real vertex models, as the dimension of the homogeneous components of the algebra of @math -invariant tensors. Here @math is the subgroup of the real orthogonal group that stabilizes the vertex model. This answers a question of Balázs Szegedy from 2007.
|
In @cite_2 , Schrijver introduced a different type of vertex connection matrix. It is also possible to characterize the rank of these connection matrices of partition functions of unweighted spin models (i.e. @math is the all-ones vector) using our method. The characterization is similar to Theorem .
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2003078549"
],
"abstract": [
"Given a symmetric n x n matrix A, we define, for any graph G, f_A(G) := sum over all maps phi: VG -> {1,...,n} of prod over edges uv in EG of a_{phi(u),phi(v)}. We characterize for which graph parameters f there is a complex matrix A with f = f_A, and similarly for real A. We show that f_A uniquely determines A, up to permuting rows and (simultaneously) columns. The proofs are based on the Nullstellensatz and some elementary invariant-theoretic techniques."
]
}
|
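The partition function discussed in these records, a sum over all vertex colorings of a product of edge weights, can be brute-forced directly for tiny graphs. The sketch below is purely illustrative (the function name and toy graphs are invented here, not from the cited papers) and only works for very small instances, since it enumerates all n^|V| maps:

```python
from itertools import product

def partition_function(A, n_vertices, edges):
    """f_A(G): sum over all maps phi: V -> {0..n-1} of the
    product over edges uv of A[phi(u)][phi(v)] (brute force)."""
    n = len(A)
    total = 0
    for phi in product(range(n), repeat=n_vertices):
        w = 1
        for (u, v) in edges:
            w *= A[phi[u]][phi[v]]
        total += w
    return total

# With the all-ones 2x2 matrix every map has weight 1, so f_A(G)
# simply counts all maps: 2**|V|.
A = [[1, 1], [1, 1]]
print(partition_function(A, 3, [(0, 1), (1, 2)]))  # 8

# With the identity matrix, an edge forces its endpoints to take the
# same state; on the connected path 0-1-2 only the 2 constant maps survive.
I = [[1, 0], [0, 1]]
print(partition_function(I, 3, [(0, 1), (1, 2)]))  # 2
```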
1106.5730
|
2951781666
|
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
|
Recently, a number of parallel schemes have been proposed in a variety of contexts. In MapReduce settings, the authors of @cite_9 proposed running many instances of stochastic gradient descent on different machines and averaging their output. Though they claim this method can reduce both the variance of their estimate and the overall bias, we show in our experiments that for the sorts of problems we are concerned with, this method does not outperform a serial scheme.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2166706236"
],
"abstract": [
"With the increase in available data parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant comes with parallel acceleration guarantees and it poses no overly tight latency constraints, which might only be available in the multicore setting. Our analysis introduces a novel proof technique — contractive mappings to quantify the speed of convergence of parameter distributions to their asymptotic limits. As a side effect this answers the question of how quickly stochastic gradient descent algorithms reach the asymptotically normal regime [1, 8]."
]
}
|
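The MapReduce averaging scheme of @cite_9 can be mimicked in a few lines: run independent SGD instances with different seeds (standing in for separate machines) and average the resulting iterates. A toy sketch on a 1-D least-squares problem, with all names and constants invented here for illustration:

```python
import random

def sgd(data, lr=0.1, epochs=50, seed=0):
    """Plain SGD for 1-D least squares: minimize sum of (w*x - y)**2 / 2."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)       # sample one data point
        w -= lr * (w * x - y) * x     # stochastic gradient step
    return w

def averaged_parallel_sgd(data, n_machines=4):
    """Averaging scheme: independent SGD runs (here executed
    sequentially, standing in for separate workers), then average."""
    runs = [sgd(data, seed=s) for s in range(n_machines)]
    return sum(runs) / len(runs)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # exact solution w = 2
print(abs(averaged_parallel_sgd(data) - 2.0) < 0.1)  # True
```

Each step contracts the error by a factor (1 - lr*x**2) in (0, 1) for this data, so every run, and hence the average, lands close to the optimum.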
1106.5730
|
2951781666
|
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
|
Schemes involving the averaging of gradients via a distributed protocol have also been proposed by several authors @cite_17 @cite_16 . While these methods do achieve linear speedups, they are difficult to implement efficiently on multicore machines as they require massive communication overhead. Distributed averaging of gradients requires message passing between the cores, and the cores need to synchronize frequently in order to compute reasonable gradient averages.
|
{
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2120293976",
"2130062883"
],
"abstract": [
"The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. We develop and analyze distributed algorithms based on dual averaging of subgradients, and provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis clearly separates the convergence of the optimization algorithm itself from the effects of communication constraints arising from the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks.",
"Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem."
]
}
|
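A serial sketch of the gradient-averaging protocols of @cite_17 @cite_16 on the same kind of toy objective: each "core" computes a gradient on its data shard, the gradients are averaged (the step that incurs the message-passing and synchronization overhead mentioned above), and every core applies the identical update. Names and constants are illustrative assumptions, not from the cited papers:

```python
def grad(w, batch):
    """Average least-squares gradient over a mini-batch."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def distributed_minibatch_sgd(shards, lr=0.1, steps=100):
    """Gradient averaging: one local gradient per core, an averaging
    step standing in for the all-reduce, then a synchronized update."""
    w = 0.0
    for _ in range(steps):
        local = [grad(w, shard) for shard in shards]  # one per core
        w -= lr * sum(local) / len(local)             # averaged update
    return w

shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]  # solution w = 2
print(round(distributed_minibatch_sgd(shards), 6))  # 2.0
```

Because every core applies the same averaged gradient, the iterates stay in lockstep; the cost, as the text notes, is that cores must synchronize at every step.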
1106.5730
|
2951781666
|
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.
|
The work most closely related to our own is a round-robin scheme proposed by @cite_19 . In this scheme, the processors are ordered and update the decision variable in turn. When the time required to lock memory for writing is dwarfed by the gradient computation time, this method results in a linear speedup, as the errors induced by the lag in the gradients are not too severe. However, we note that in many applications of interest in machine learning, gradient computation time is incredibly fast, and we now demonstrate that in a variety of applications, HOGWILD! outperforms such a round-robin approach by an order of magnitude.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2949198759"
],
"abstract": [
"Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning."
]
}
|
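The round-robin scheme of @cite_19 can be caricatured serially: writes to the decision variable are serialized across cores, and each core's gradient is computed from the stale iterate it saw at its previous turn. This toy simulation (all names, constants, and the staleness model are simplifications assumed here, not the cited algorithm itself) shows the lagged gradients still converging when the step size is small:

```python
def delayed_sgd(data, n_cores=4, lr=0.01, rounds=500):
    """Round-robin SGD with delayed gradients: core k's update uses
    the parameter value it last observed, n_cores writes ago."""
    w = 0.0
    snapshots = [0.0] * n_cores      # stale view of w, one per core
    t = 0
    for _ in range(rounds):
        for k in range(n_cores):     # cores take turns writing
            x, y = data[t % len(data)]
            w_stale = snapshots[k]   # gradient from a lagged iterate
            w -= lr * (w_stale * x - y) * x
            snapshots[k] = w         # core k re-reads after its write
            t += 1
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # exact solution w = 2
print(abs(delayed_sgd(data) - 2.0) < 0.5)  # True
```

With a small step size the delay only mildly perturbs the contraction, which is the regime in which the round-robin analysis gives a linear speedup; the text's point is that when gradients are cheap, the serialized writes themselves become the bottleneck.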