aid (string, 9–15 chars) | mid (string, 7–10 chars) | abstract (string, 78–2.56k chars) | related_work (string, 92–1.77k chars) | ref_abstract (dict) |
---|---|---|---|---|
1102.3493
|
2139848116
|
In distributed storage systems built using commodity hardware, it is necessary to have data redundancy in order to ensure system reliability. In such systems, it is also often desirable to be able to quickly repair storage nodes that fail. We consider a scheme — introduced by El Rouayheb and Ramchandran — which uses combinatorial block design in order to design storage systems that enable efficient (and exact) node repair. In this work, we investigate systems where node sizes may be much larger than replication degrees, and explicitly provide algorithms for constructing these storage designs. Our designs, which are related to projective geometries, are based on the construction of bipartite cage graphs (with girth 6) and the concept of mutually-orthogonal Latin squares. Via these constructions, we can guarantee that the resulting designs require the fewest number of storage nodes for the given parameters, and can further show that these systems can be easily expanded without need for frequent reconfiguration.
|
El Rouayheb and Ramchandran @cite_17 introduce a related scheme, based on what they term Fractional Repetition codes, which can perform exact repair in the minimum-bandwidth regime. They then derive information-theoretic bounds on the storage capacity of such systems under the given repair requirements. Although their repair model is table-based (instead of random access as in @cite_20 ), the scheme of @cite_17 has the favorable characteristics of exact repair and uncoded storage of data chunks. Randomized constructions of such schemes are investigated in @cite_21 .
|
{
"cite_N": [
"@cite_21",
"@cite_20",
"@cite_17"
],
"mid": [
"2118917487",
"2951800112",
"2120770247"
],
"abstract": [
"We introduce an efficient family of exact regenerating codes for data storage in large-scale distributed systems. We refer to these new codes as Distributed Replication-based Exact Simple Storage (DRESS) codes. A key property of DRESS codes is their very efficient distributed and uncoded repair and growth processes that have minimum bandwidth, reads and computational overheads. This property is essential for large-scale systems with high reliability and availability requirements. DRESS codes will first encode the file using a Maximum Distance Separable (MDS) code, then place multiple replicas of the coded packets on different nodes in the system. We propose a simple and flexible randomized scheme for placing those replicas based on the balls-and-bins model. Our construction showcases the power of the probabilistic approach in constructing regenerating codes that can be efficiently repaired and grown.",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
"We introduce a new class of exact Minimum-Bandwidth Regenerating (MBR) codes for distributed storage systems, characterized by a low-complexity uncoded repair process that can tolerate multiple node failures. These codes consist of the concatenation of two components: an outer MDS code followed by an inner repetition code. We refer to the inner code as a Fractional Repetition code since it consists of splitting the data of each node into several packets and storing multiple replicas of each on different nodes in the system. Our model for repair is table-based, and thus, differs from the random access model adopted in the literature. We present constructions of Fractional Repetition codes based on regular graphs and Steiner systems for a large set of system parameters. The resulting codes are guaranteed to achieve the storage capacity for random access repair. The considered model motivates a new definition of capacity for distributed storage systems, that we call Fractional Repetition capacity. We provide upper bounds on this capacity, while a precise expression remains an open problem."
]
}
|
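A minimal Python sketch of the replication-based, uncoded ("table-based") repair idea behind the Fractional Repetition / DRESS schemes discussed in this row. The Fano-plane layout and the node/chunk names are illustrative assumptions, not the constructions from the cited papers.

```python
# Toy illustration of uncoded, table-based repair: chunks of an (assumed)
# MDS-coded file are replicated on storage nodes according to a block design.
# Here the blocks of the Fano plane serve as the example design: every chunk
# sits on exactly 3 nodes and every node holds exactly 3 chunks.
FANO_BLOCKS = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]
nodes = {f"node{i}": set(block) for i, block in enumerate(FANO_BLOCKS)}

def repair(failed, layout):
    """Return a repair table: for each chunk on the failed node, a surviving
    node from which an exact (uncoded) replica can simply be copied."""
    table = {}
    for chunk in layout[failed]:
        donor = next(n for n, chunks in layout.items()
                     if n != failed and chunk in chunks)
        table[chunk] = donor
    return table

if __name__ == "__main__":
    print(repair("node0", nodes))
    # e.g. {1: 'node1', 2: 'node3', 3: 'node5'} -- each lost chunk is fetched
    # as-is from one surviving replica, so repair involves no decoding.
```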
1102.3493
|
2139848116
|
In distributed storage systems built using commodity hardware, it is necessary to have data redundancy in order to ensure system reliability. In such systems, it is also often desirable to be able to quickly repair storage nodes that fail. We consider a scheme — introduced by El Rouayheb and Ramchandran — which uses combinatorial block design in order to design storage systems that enable efficient (and exact) node repair. In this work, we investigate systems where node sizes may be much larger than replication degrees, and explicitly provide algorithms for constructing these storage designs. Our designs, which are related to projective geometries, are based on the construction of bipartite cage graphs (with girth 6) and the concept of mutually-orthogonal Latin squares. Via these constructions, we can guarantee that the resulting designs require the fewest number of storage nodes for the given parameters, and can further show that these systems can be easily expanded without need for frequent reconfiguration.
|
Uncoded storage has numerous advantages for distributed storage systems. For instance, uncoded data at the nodes allows for distributed computing (e.g., for cloud computing) by spreading computation out to the node(s) that contain the data to be processed. Upfal and Wigderson @cite_0 consider a method for parallel computation that randomly distributes data chunks among multiple memory devices, and derive asymptotic performance results. In contrast, our designs are deterministic, and we are also able to guarantee the smallest possible size for our storage system. Furthermore, if uncoded data chunks are distributed among the nodes according to Steiner systems, then load-balancing of computations is always possible.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2066015450"
],
"abstract": [
"The power of shared-memory in models of parallel computation is studied, and a novel distributed data structure that eliminates the need for shared memory without significantly increasing the run time of the parallel computation is described. More specifically, it is shown how a complete network of processors can deterministically simulate one PRAM step in O (log n (log log n ) 2 ) time when both models use n processors and the size of the PRAM's shared memory is polynomial in n . (The best previously known upper bound was the trivial O ( n )). It is established that this upper bound is nearly optimal, and it is proved that an on-line simulation of T PRAM steps by a complete network of processors requires O( T (log n log log n )) time. A simple consequence of the upper bound is that an Ultracomputer (the currently feasible general-purpose parallel machine) can simulate one step of a PRAM (the most convenient parallel model to program) in O ((log n ) 2 log log n ) steps."
]
}
|
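A small follow-up sketch of the load-balancing point made in this row: with uncoded chunks replicated according to a block design, each computation can be routed to whichever replica holder is currently least loaded. The greedy policy and toy data are illustrative assumptions.

```python
# Sketch: route per-chunk computations to replica holders, greedily picking
# the least-loaded node that stores the needed chunk. The Fano-plane layout
# and the greedy policy are illustrative assumptions, not the paper's design.
from collections import Counter

FANO_BLOCKS = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]
nodes = {f"node{i}": block for i, block in enumerate(FANO_BLOCKS)}

def assign(tasks, layout):
    """tasks: list of chunk ids whose (uncoded) data must be processed
    locally. Returns the task plan and the resulting per-node load."""
    load = Counter()
    plan = []
    for chunk in tasks:
        holders = [n for n, chunks in layout.items() if chunk in chunks]
        target = min(holders, key=lambda n: load[n])  # least-loaded replica
        load[target] += 1
        plan.append((chunk, target))
    return plan, load

if __name__ == "__main__":
    plan, load = assign([1, 1, 1, 2, 3, 4, 5, 6, 7], nodes)
    print(plan)
    print(load)  # since every chunk has 3 replicas, work can be spread out
```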
1102.3493
|
2139848116
|
In distributed storage systems built using commodity hardware, it is necessary to have data redundancy in order to ensure system reliability. In such systems, it is also often desirable to be able to quickly repair storage nodes that fail. We consider a scheme — introduced by El Rouayheb and Ramchandran — which uses combinatorial block design in order to design storage systems that enable efficient (and exact) node repair. In this work, we investigate systems where node sizes may be much larger than replication degrees, and explicitly provide algorithms for constructing these storage designs. Our designs, which are related to projective geometries, are based on the construction of bipartite cage graphs (with girth 6) and the concept of mutually-orthogonal Latin squares. Via these constructions, we can guarantee that the resulting designs require the fewest number of storage nodes for the given parameters, and can further show that these systems can be easily expanded without need for frequent reconfiguration.
|
In addition to @cite_17 , the use of BIBDs for guaranteeing load-balanced disk repair in distributed storage systems is also considered in @cite_2 @cite_16 , for application to RAID-based disk arrays. In @cite_11 , the authors discuss how block designs may be used to lay out parity stripes in declustered parity RAID disk arrays. The block designs from our work may be helpful for distributing parity blocks in this scenario, in order to build disk arrays with good repair properties.
|
{
"cite_N": [
"@cite_16",
"@cite_2",
"@cite_11",
"@cite_17"
],
"mid": [
"",
"1546442581",
"2137982318",
"2120770247"
],
"abstract": [
"",
"Disk arrays (RAID) have been proposed as a possible approach to solving the emerging I O bottleneck problem. The performance of a RAID system when all disks are operational and the MTTF,,, (mean time to system failure) have been well studied. However, the performance of disk arrays in the presence of failed disks has not received much attention. The same techniques that provide the storage efficient redundancy of a RAID system can also result in a significant performance hit when a single disk fails. This is of importance since single disk failures are expected to be relatively frequent in a system with a large number of disks. In this paper we propose a new variation of the RAID organization that has significant advantages in both reducing the magnitude of the performance degradation when there is a single failure and can also reduce the MTTF,,,. We also discuss several strategies that can be implemented to speed the rebuild of the failed disk and thus increase the MTTF,,,. The efficacy of these strategies is shown to require the improved properties of the new RAID organization. An analysis is carried out to quantify the tradeoffs.",
"The performance of traditional RAID Level 5 arrays is, for many applications, unacceptably poor while one of its constituent disks is non-functional. This paper describes and evaluates mechanisms by which this disk array failure-recovery performance can be improved. The two key issues addressed are thedata layout, the mapping by which data and parity blocks are assigned to physical disk blocks in an array, and thereconstruction algorithm, which is the technique used to recover data that is lost when a component disk fails.",
"We introduce a new class of exact Minimum-Bandwidth Regenerating (MBR) codes for distributed storage systems, characterized by a low-complexity uncoded repair process that can tolerate multiple node failures. These codes consist of the concatenation of two components: an outer MDS code followed by an inner repetition code. We refer to the inner code as a Fractional Repetition code since it consists of splitting the data of each node into several packets and storing multiple replicas of each on different nodes in the system. Our model for repair is table-based, and thus, differs from the random access model adopted in the literature. We present constructions of Fractional Repetition codes based on regular graphs and Steiner systems for a large set of system parameters. The resulting codes are guaranteed to achieve the storage capacity for random access repair. The considered model motivates a new definition of capacity for distributed storage systems, that we call Fractional Repetition capacity. We provide upper bounds on this capacity, while a precise expression remains an open problem."
]
}
|
1102.3493
|
2139848116
|
In distributed storage systems built using commodity hardware, it is necessary to have data redundancy in order to ensure system reliability. In such systems, it is also often desirable to be able to quickly repair storage nodes that fail. We consider a scheme — introduced by El Rouayheb and Ramchandran — which uses combinatorial block design in order to design storage systems that enable efficient (and exact) node repair. In this work, we investigate systems where node sizes may be much larger than replication degrees, and explicitly provide algorithms for constructing these storage designs. Our designs, which are related to projective geometries, are based on the construction of bipartite cage graphs (with girth 6) and the concept of mutually-orthogonal Latin squares. Via these constructions, we can guarantee that the resulting designs require the fewest number of storage nodes for the given parameters, and can further show that these systems can be easily expanded without need for frequent reconfiguration.
|
Certain block designs may also be applicable to the design of error-correcting codes, particularly in the construction of geometrical codes (see Sections 2.5 and 13.8 of MacWilliams and Sloane's The Theory of Error-Correcting Codes). Graphs without short cycles have been considered in the context of Tanner graphs @cite_10 , and finite geometries in particular have been considered in the context of LDPC codes @cite_8 . Block designs and their related bipartite graphs are also considered in code design for magnetic recording applications in @cite_12 .
|
{
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_8"
],
"mid": [
"2133068391",
"1647219390",
"2114869758"
],
"abstract": [
"A method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subeodes. Both the encoders and decoders proposed are shown to take advantage of the code's explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to make effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles",
"This paper introduces a combinatorial construction of a class of iteratively decodable codes, an approach diametrically opposed to the prevalent practice of using large, random-like codes. Our codes are well-structured and, unlike random codes, can lend themselves to a very low complexity implementation. A systematic way of constructing codes based on Steiner systems and the Z sub spl nu , group is presented, and a hardware efficient encoding algorithm is proposed. A substantial performance improvement of high-rate Steiner codes over the existing schemes used in magnetic recording systems is demonstrated.",
"This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner (1981) graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding."
]
}
|
1102.3173
|
2127681989
|
In this paper, we discuss the ability of channel codes to enhance cryptographic secrecy. Toward that end, we present the secrecy metric of degrees of freedom in an attacker's knowledge of the cryptogram, which is similar to equivocation. Using this notion of secrecy, we show how a specific practical channel coding system can be used to hide information about the ciphertext, thus increasing the difficulty of cryptographic attacks. The system setup is the wiretap channel model where transmitted data traverse through independent packet erasure channels (PECs) with public feedback for authenticated automatic repeat-request (ARQ). The code design relies on puncturing nonsystematic low-density parity-check (LDPC) codes with the intent of inflicting an eavesdropper with stopping sets in the decoder. The design amplifies errors when stopping sets occur such that a receiver must guess all the channel-erased bits correctly to avoid an error rate of one half in the ciphertext. We extend previous results on the coding scheme by giving design criteria that reduce the effectiveness of a maximum-likelihood (ML) attack to that of a message-passing (MP) attack. We further extend security analysis to models with multiple receivers and collaborative attackers. Cryptographic security is even enhanced by the system when eavesdroppers have better channel quality than legitimate receivers.
|
Our encoder makes use of fundamental practical design ideas which have been shown to offer secrecy. For example, our encoder employs nonsystematic LDPC codes in order to hide information bits and magnify coding errors. Secrecy properties of these codes have been studied in @cite_2 . We further employ intentional puncturing of encoded bits, a technique shown to offer security in @cite_1 @cite_24 . Our scheme punctures with the goal of inducing stopping sets in an eavesdropper's received data. As a result, every transmitted bit is crucial for decoding; our intent is to punish an eavesdropper for every missing piece of information. Finally, in order to distribute erasures throughout the data set, the encoder interleaves coded bits among several transmitted packets. Similar ideas of interleaving coded symbols have been used in @cite_22 @cite_34 , in conjunction with wiretap codes developed in @cite_26 , to offer secrecy to various systems. The works @cite_36 @cite_28 give results for ARQ and feedback wiretap systems.
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_36",
"@cite_28",
"@cite_1",
"@cite_24",
"@cite_2",
"@cite_34"
],
"mid": [
"2124602541",
"2109592212",
"2123658733",
"",
"1966951829",
"",
"2962987767",
""
],
"abstract": [
"With the advent of quantum key distribution (QKD) systems, perfect (i.e., information-theoretic) security can now be achieved for distribution of a cryptographic key. QKD systems and similar protocols use classical error-correcting codes for both error correction (for the honest parties to correct errors) and privacy amplification (to make an eavesdropper fully ignorant). From a coding perspective, a good model that corresponds to such a setting is the wire tap channel introduced by Wyner in 1975. In this correspondence, we study fundamental limits and coding methods for wire tap channels. We provide an alternative view of the proof for secrecy capacity of wire tap channels and show how capacity achieving codes can be used to achieve the secrecy capacity for any wiretap channel. We also consider binary erasure channel and binary symmetric channel special cases for the wiretap channel and propose specific practical codes. In some cases our designs achieve the secrecy capacity and in others the codes provide security at rates below secrecy capacity. For the special case of a noiseless main channel and binary erasure channel, we consider encoder and decoder design for codes achieving secrecy on the wiretap channel; we show that it is possible to construct linear-time decodable secrecy codes based on low-density parity-check (LDPC) codes that achieve secrecy.",
"We propose a method that provides information-theoretic security for client-server communications. By introducing an appropriate encoding scheme, we show how a client-server architecture under active attacks can be modeled as a binary-erasure wiretap channel. The secrecy capacity of the equivalent wiretap channel is then used as a metric to optimize the architecture and limit the impact of the attacks. Upper and lower bounds of the optimal secrecy capacity are derived and analyzed. While still mostly of theoretical interest, our analysis sheds some light on the practical design of resistant and secure client-server architectures.",
"In this work, the critical role of noisy feedback in enhancing the secrecy capacity of the wiretap channel is established. Unlike previous works, where a noiseless public discussion channel is used for feedback, the feed-forward and feedback signals share the same noisy channel in the present model. Quite interestingly, this noisy feedback model is shown to be more advantageous in the current setting. More specifically, the discrete memoryless modulo-additive channel with a full-duplex destination node is considered first, and it is shown that the judicious use of feedback increases the secrecy capacity to the capacity of the source-destination channel in the absence of the wiretapper. In the achievability scheme, the feedback signal corresponds to a private key, known only to the destination. In the half-duplex scheme, a novel feedback technique that always achieves a positive perfect secrecy rate (even when the source-wiretapper channel is less noisy than the source-destination channel) is proposed. These results hinge on the modulo-additive property of the channel, which is exploited by the destination to perform encryption over the channel without revealing its key to the source. Finally, this scheme is extended to the continuous real valued modulo-Lambda channel where it is shown that the secrecy capacity with feedback is also equal to the capacity in the absence of the wiretapper.",
"",
"A coding scheme for the Gaussian wiretap channel based on low-density parity-check (LDPC) codes is presented. The messages are transmitted over punctured bits to hide data from eavesdroppers. It is shown by means of density evolution that the BER of an eavesdropper, who operates below the code's SNR threshold and has the ability to use a bitwise MAP decoder, increases to 0.5 within a few dB. It is shown how asymptotically optimized LDPC codes can be designed with differential evolution where the goal is to achieve high reliability between friendly parties and security against a passive eavesdropper while keeping the security gap as small as possible. The proposed coding scheme is also efficiently encodable in almost linear time.",
"",
"This paper is a first study on the usage of non-systematic codes based on scrambling matrices for physical layer security. The chance of implementing transmission security at the physical layer is known since many years, but it is now gaining an increasing interest due to its several possible applications. It has been shown that channel coding techniques can be effectively exploited for designing physical layer security schemes, in such a way that an unauthorized receiver, experiencing a channel different from that of the authorized receiver, is not able to gather any information. Recently, it has been proposed to exploit puncturing techniques in order to reduce the security gap between the authorized and unauthorized channels. In this paper, we show that the security gap can be further reduced by using non-systematic codes, able to scramble information bits within the transmitted codeword.",
""
]
}
|
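A toy Python sketch of the puncturing-and-interleaving idea in this row's related work: coded bits are interleaved across several packets so that one erased packet leaves the eavesdropper with unknown bits scattered through the whole codeword. No real LDPC encoder or stopping-set analysis is implemented; the codeword, packet count, and erasure pattern are placeholders.

```python
# Illustration only: interleave coded bits across packets so that losing any
# one packet erases bits spread throughout the codeword rather than one
# contiguous block. The "codeword" is random placeholder data.
import random

def interleave(codeword, num_packets):
    """Round-robin interleave codeword bits into num_packets packets."""
    packets = [[] for _ in range(num_packets)]
    for i, bit in enumerate(codeword):
        packets[i % num_packets].append((i, bit))  # remember the position
    return packets

def eavesdropper_view(packets, erased_packet):
    """Erase one packet; '?' marks positions the eavesdropper must guess."""
    n = sum(len(p) for p in packets)
    view = ["?"] * n
    for j, packet in enumerate(packets):
        if j == erased_packet:
            continue
        for i, bit in packet:
            view[i] = str(bit)
    return "".join(view)

if __name__ == "__main__":
    random.seed(0)
    codeword = [random.randint(0, 1) for _ in range(24)]
    packets = interleave(codeword, num_packets=4)
    print(eavesdropper_view(packets, erased_packet=2))
    # Every 4th position is unknown: the erasures are distributed across the
    # codeword, which is what makes guessing the missing bits expensive.
```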
1102.1985
|
1937101755
|
Theoretical progress in understanding the dynamics of spreading processes on graphs suggests the existence of an epidemic threshold below which no epidemics form and above which epidemics spread to a significant fraction of the graph. We have observed information cascades on the social media site Digg that spread fast enough for one initial spreader to infect hundreds of people, yet end up affecting only 0.1% of the entire network. We find that two effects, previously studied in isolation, combine cooperatively to drastically limit the final size of cascades on Digg. First, because of the highly clustered structure of the Digg network, most people who are aware of a story have been exposed to it via multiple friends. This structure lowers the epidemic threshold while moderately slowing the overall growth of cascades. In addition, we find that the mechanism for social contagion on Digg points to a fundamental difference between information spread and other contagion processes: despite multiple opportunities for infection within a social group, people are less likely to become spreaders of information with repeated exposure. The consequences of this mechanism become more pronounced for more clustered graphs. Ultimately, this effect severely curtails the size of social epidemics on Digg.
|
Another modified spreading process for social contagion that has been considered is the effect of adding "stiflers" @cite_10 . Similar to the friend saturating model (FSM), stiflers will not spread a story (rumor) no matter how many times they encounter it. Stiflers, however, are not merely desensitized to multiple exposures; they may actively convert spreaders or susceptible nodes into stiflers. This more complicated dynamic can lead to drastic changes, e.g., the elimination of the epidemic threshold. On Digg, a fan who does not vote on a story after multiple exposures does not actively persuade other exposed, susceptible fans to refrain from voting on it. Hence, this model does not apply to the process of information diffusion on Digg.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1544632947"
],
"abstract": [
"The availability of large data sets have allowed researchers to uncover complex properties such as large scale fluctuations and heterogeneities in many networks which have lead to the breakdown of standard theoretical frameworks and models. Until recently these systems were considered as haphazard sets of points and connections. Recent advances have generated a vigorous research effort in understanding the effect of complex connectivity patterns on dynamical phenomena. For example, a vast number of everyday systems, from the brain to ecosystems, power grids and the Internet, can be represented as large complex networks. This new and recent account presents a comprehensive explanation of these effects."
]
}
|
1102.1985
|
1937101755
|
Theoretical progress in understanding the dynamics of spreading processes on graphs suggests the existence of an epidemic threshold below which no epidemics form and above which epidemics spread to a significant fraction of the graph. We have observed information cascades on the social media site Digg that spread fast enough for one initial spreader to infect hundreds of people, yet end up affecting only 0.1% of the entire network. We find that two effects, previously studied in isolation, combine cooperatively to drastically limit the final size of cascades on Digg. First, because of the highly clustered structure of the Digg network, most people who are aware of a story have been exposed to it via multiple friends. This structure lowers the epidemic threshold while moderately slowing the overall growth of cascades. In addition, we find that the mechanism for social contagion on Digg points to a fundamental difference between information spread and other contagion processes: despite multiple opportunities for infection within a social group, people are less likely to become spreaders of information with repeated exposure. The consequences of this mechanism become more pronounced for more clustered graphs. Ultimately, this effect severely curtails the size of social epidemics on Digg.
|
The friend saturating model we have used to describe cascades on Digg is a special case of a broader class of models called "decreasing cascade models" @cite_21 . Several works have observed similar diminishing returns from friends in social networks. @cite_30 analyzed the usefulness of product recommendations on Amazon.com. They found that people rarely received more than a handful of recommendations for any product, and that the marginal benefit of multiple recommendations, while product dependent, was typically sublinear (i.e., two recommendations did not make someone twice as likely to buy as one recommendation). Link formation was studied in @cite_31 , where the authors also found diminishing returns in the probability of befriending someone with whom one shares @math mutual friends, with saturation occurring around @math . The probability of joining a group that @math friends have joined was studied in @cite_1 , with saturation occurring for @math around 10-20.
|
{
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_21",
"@cite_1"
],
"mid": [
"1994473607",
"2049607688",
"",
"2432978112"
],
"abstract": [
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.",
"Social networks evolve over time, driven by the shared activities and affiliations of their members, by similarity of individuals' attributes, and by the closure of short network cycles. We analyzed a dynamic social network comprising 43,553 students, faculty, and staff at a large university, in which interactions between individuals are inferred from time-stamped e-mail headers recorded over one academic year and are matched with affiliations and attributes. We found that network evolution is dominated by a combination of effects arising from network topology itself and the organizational structure in which the network is embedded. In the absence of global perturbations, average network properties appear to approach an equilibrium state, whereas individual properties are unstable.",
"",
"The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities."
]
}
|
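A minimal simulation sketch of the diminishing-returns effect discussed in this row: the chance of becoming a spreader stops growing after a few exposures. The small-world graph, the exposure cap, and all parameters are illustrative assumptions, not the model fitted to Digg.

```python
# Sketch of a spreading process with "friend saturation": each exposure gives
# an independent chance p of adoption, but exposures beyond `cap` are ignored.
# Graph, p, and cap are illustrative assumptions (not fitted to Digg data).
import random
import networkx as nx

def spread(G, seed, p=0.1, cap=2, rng=None):
    """Return the set of adopters when only the first `cap` exposures of each
    node can trigger adoption (each with independent probability p)."""
    rng = rng or random.Random(1)
    exposures = {v: 0 for v in G}
    adopters, frontier = {seed}, [seed]
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v in adopters or exposures[v] >= cap:
                    continue
                exposures[v] += 1
                if rng.random() < p:
                    adopters.add(v)
                    new.append(v)
        frontier = new
    return adopters

if __name__ == "__main__":
    G = nx.watts_strogatz_graph(2000, 8, 0.1, seed=7)  # clustered graph
    for cap in (2, 8):
        sizes = [len(spread(G, s, cap=cap)) for s in range(20)]
        print(f"cap={cap}: mean cascade size {sum(sizes) / len(sizes):.1f}")
    # With a low cap, repeated exposures inside clusters are wasted and
    # cascades typically stay small; a higher cap lets them grow much larger.
```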
1102.1985
|
1937101755
|
Theoretical progress in understanding the dynamics of spreading processes on graphs suggests the existence of an epidemic threshold below which no epidemics form and above which epidemics spread to a significant fraction of the graph. We have observed information cascades on the social media site Digg that spread fast enough for one initial spreader to infect hundreds of people, yet end up affecting only 0.1% of the entire network. We find that two effects, previously studied in isolation, combine cooperatively to drastically limit the final size of cascades on Digg. First, because of the highly clustered structure of the Digg network, most people who are aware of a story have been exposed to it via multiple friends. This structure lowers the epidemic threshold while moderately slowing the overall growth of cascades. In addition, we find that the mechanism for social contagion on Digg points to a fundamental difference between information spread and other contagion processes: despite multiple opportunities for infection within a social group, people are less likely to become spreaders of information with repeated exposure. The consequences of this mechanism become more pronounced for more clustered graphs. Ultimately, this effect severely curtails the size of social epidemics on Digg.
|
@cite_22 modeled viral email cascades using branching processes such as the Galton-Watson process and the Bellman-Harris process. They argued that the topology of the underlying social network is irrelevant to the prediction of cascade size. This may hold true for the tree-like cascades studied by the authors. However, as stated previously, the dynamics of information propagation on Digg are not tree-like, so these models do not apply. Future work includes studying the impact of activity patterns on information diffusion dynamics.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2109207218"
],
"abstract": [
"We study the impact of human activity patterns on information diffusion. To this end we ran a viral email experiment involving 31183 individuals in which we were able to track a specific piece of information through the social network. We found that, contrary to traditional models, information travels at an unexpectedly slow pace. By using a branching model which accurately describes the experiment, we show that the large heterogeneity found in the response time is responsible for the slow dynamics of information at the collective level. Given the generality of our result, we discuss the important implications of this finding while modeling human dynamical collective phenomena."
]
}
|
1102.1273
|
2950950577
|
We give a memoryless scale-invariant randomized algorithm for the Buffer Management with Bounded Delay problem that is e/(e-1)-competitive against an adaptive adversary, together with better performance guarantees for many restricted variants, including the s-bounded instances. In particular, our algorithm attains the optimum competitive ratio of 4/3 on 2-bounded instances. Both the algorithm and its analysis are applicable to a more general problem, called Collecting Items, in which only the relative order between packets' deadlines is known. Our algorithm is the optimal randomized memoryless algorithm against an adaptive adversary for that problem in a strong sense. While some of the provided upper bounds were already known, in general, they were attained by several different algorithms.
|
Bienkowski et al. @cite_5 studied a generalization of buffer management with bounded delay in which the algorithm knows only the relative order between packets' deadlines rather than their exact values; following Bienkowski et al., we call the generalized problem Collecting Items. Their paper focuses on deterministic algorithms, but it also provides certain lower bounds for memoryless algorithms, which our algorithm matches. See the Appendix for details.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1974319951"
],
"abstract": [
"We consider online competitive algorithms for the problem of collecting weighted items from a dynamic queue S. The content of S varies over time. An update to S can occur between any two consecutive time steps, and it consists in deleting any number of items at the front of S and inserting other items into arbitrary locations in S. At each time step we are allowed to collect one item in S. The objective is to maximize the total weight of collected items. This is a generalization of bounded-delay packet scheduling (also known as buffer management). We present several upper and lower bounds on the competitive ratio for the general case and for some restricted variants of this problem."
]
}
|
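A small sketch of the bounded-delay buffer-management model referenced in this row: one packet may be transmitted per step, each pending packet has a weight and a deadline, and the goal is to maximize total transmitted weight. The greedy policy below is only a baseline illustration of the model, not the memoryless randomized algorithm from the paper.

```python
# Model illustration only: greedy "send the heaviest pending packet" baseline
# for bounded-delay buffer management. This is not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Packet:
    weight: float
    deadline: int  # last time step at which the packet may still be sent

def greedy_schedule(arrivals, horizon):
    """arrivals: dict mapping step -> list of Packet. At each step, drop
    expired packets and transmit the heaviest remaining one."""
    pending, total = [], 0.0
    for t in range(horizon):
        pending += arrivals.get(t, [])
        pending = [p for p in pending if p.deadline >= t]
        if pending:
            best = max(pending, key=lambda p: p.weight)
            pending.remove(best)
            total += best.weight
    return total

if __name__ == "__main__":
    # A tiny 2-bounded instance (every packet can be sent in at most 2 steps)
    arrivals = {0: [Packet(1.0, 0), Packet(0.6, 1)], 1: [Packet(1.0, 1)]}
    print(greedy_schedule(arrivals, horizon=2))  # greedy collects 2.0 here
```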
1102.1111
|
2086855726
|
Collaborative tagging has emerged as a popular and effective method for organizing and describing pages on the Web. We present Treelicious, a system that allows hierarchical navigation of tagged web pages. Our system enriches the navigational capabilities of standard tagging systems, which typically exploit only popularity and co-occurrence data. We describe a prototype that leverages the Wikipedia category structure to allow a user to semantically navigate pages from the Delicious social bookmarking service. In our system a user can perform an ordinary keyword search and browse relevant pages but is also given the ability to broaden the search to more general topics and narrow it to more specific topics. We show that Treelicious indeed provides an intuitive framework that allows for improved and effective discovery of knowledge.
|
Heymann and Garcia-Molina @cite_4 convert a large corpus of tagged objects into a hierarchical structure of tags using purely statistical techniques. They achieve this by leveraging the notions of generality and similarity that users implicitly embed in their annotations and by applying graph centrality algorithms to build a tree of tags. Though the resulting tree is surprisingly accurate in places, it degenerates into a simple similarity graph in others and is not semantically sound enough for reliable hierarchical navigation.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2161173831"
],
"abstract": [
"Collaborative tagging systems---systems where many casual users annotate objects with free-form strings (tags) of their choosing---have recently emerged as a powerful way to label and organize large collections of data. During our recent investigation into these types of systems, we discovered a simple but remarkably effective algorithm for converting a large corpus of tags annotating objects in a tagging system into a navigable hierarchical taxonomy of tags. We first discuss the algorithm and then present a preliminary model to explain why it is so effective in these types of systems."
]
}
|
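A compact sketch of the general statistical approach described in this row: rank tags by a generality score and greedily attach each tag under its most similar, already-placed tag. The similarity measure, the 0.3 threshold, and the toy bookmarks are assumptions; this is not the exact algorithm of the cited paper.

```python
# Sketch: build a tag tree from co-occurrence statistics. Frequency stands in
# for the graph-centrality "generality" score; the 0.3 similarity threshold
# and toy data are illustrative assumptions, not the cited paper's algorithm.
from collections import Counter
from itertools import combinations
from math import sqrt

bookmarks = [  # each bookmark = the set of tags one user gave one page
    {"programming", "python", "web"}, {"programming", "python"},
    {"programming", "java"}, {"web", "css", "design"},
    {"web", "design"}, {"python", "web"}, {"design", "art"},
]

freq = Counter(tag for b in bookmarks for tag in b)
co = Counter()
for b in bookmarks:
    for a, c in combinations(sorted(b), 2):
        co[(a, c)] += 1

def sim(a, b):
    """Cosine-style similarity between tags based on co-occurrence counts."""
    return co[tuple(sorted((a, b)))] / sqrt(freq[a] * freq[b])

parent, placed = {}, []
for tag in sorted(freq, key=freq.get, reverse=True):  # general tags first
    best = max(placed, key=lambda t: sim(tag, t), default=None)
    parent[tag] = best if best and sim(tag, best) >= 0.3 else "ROOT"
    placed.append(tag)

print(parent)  # e.g. 'python' lands under 'programming', 'css' under 'design'
```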
1102.1111
|
2086855726
|
Collaborative tagging has emerged as a popular and effective method for organizing and describing pages on the Web. We present Treelicious, a system that allows hierarchical navigation of tagged web pages. Our system enriches the navigational capabilities of standard tagging systems, which typically exploit only popularity and co-occurrence data. We describe a prototype that leverages the Wikipedia category structure to allow a user to semantically navigate pages from the Delicious social bookmarking service. In our system a user can perform an ordinary keyword search and browse relevant pages but is also given the ability to broaden the search to more general topics and narrow it to more specific topics. We show that Treelicious indeed provides an intuitive framework that allows for improved and effective discovery of knowledge.
|
Instead of generating a hierarchy from the tags themselves, @cite_6 impose hierarchy on a set of tags using the WordNet (http://wordnet.princeton.edu) lexical database. In their system, when a tag is used to perform a search on Delicious, they gather a sample of the tags that have been applied to each of the result pages into one large set. They then pipe these tags through a module that uses the hypernym and hyponym hierarchy information in WordNet to build a semantic tree of tags related to the search tag. They also prune WordNet nodes that do not appear in the tag set to compress the tree; because of this, their hierarchy is bounded by the search tag. Though their results are nicely hierarchical, they lack a sense of completeness, having been seeded only by the co-occurring tags in the local results. There are also problems with the mapping from Delicious tags to WordNet words, owing to differences in the formality of language (e.g., nyc versus New York City) and the prevalence of recently introduced terminology in Delicious (e.g., AJAX, Obama, Harry Potter).
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"158397033"
],
"abstract": [
"As the volume of information in the read-write Web increases rapidly, folksonomies are becoming a widely used tool to organize and categorize resources in a bottom up, flat and inclusive way. However, due to their very structure, they show some drawbacks; in particular the lack of hierarchy bears some limitations in the possibilities of searching and browsing. In this paper we investigate a new approach, based on the idea of integrating an ontology in the navigation interface of a folksonomy, and we describe an application that filters del.icio.us keywords through the WordNet hierarchy of concepts, to enrich the possibilities of navigation."
]
}
|
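A short sketch of the WordNet-filtering idea in this row: map each tag to a synset, climb its hypernym path, and keep the nearest ancestor that is itself a tag. Requires NLTK with the WordNet corpus downloaded; the toy tag set and the first-synset assumption are illustrative simplifications.

```python
# Sketch: impose a hierarchy on a set of Delicious-style tags by climbing
# WordNet hypernym paths and pruning nodes that are not themselves tags.
# Requires: pip install nltk, then nltk.download("wordnet") once.
# Taking only the first noun synset per tag is a simplifying assumption.
from nltk.corpus import wordnet as wn

tags = {"cat", "dog", "animal", "python", "organism"}

def tag_parent(tag, tag_set):
    """Return the nearest WordNet hypernym of `tag` that is also a tag."""
    synsets = wn.synsets(tag, pos=wn.NOUN)
    if not synsets:
        return None  # informal or very recent terms often have no entry
    for path in synsets[0].hypernym_paths():
        for ancestor in reversed(path[:-1]):  # nearest ancestor first
            for lemma in ancestor.lemma_names():
                if lemma.lower() in tag_set:
                    return lemma.lower()
    return None

for t in sorted(tags):
    print(t, "->", tag_parent(t, tags - {t}))
# e.g. "cat -> animal" and "animal -> organism"; tags like "nyc" or "AJAX"
# would return None, which is exactly the mapping problem noted above.
```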
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
Phelps and Wilensky @cite_6 proposed calculating the lexical signature of a target page, and embedding that lexical signature into the link URIs to make the referenced page easier to find. Their method relied on a five-term lexical signature being calculated at the time the link was created, and included in the link URI. This placed the burden of preparing for future recovery on the content creator or administrator; if the creator did not calculate the lexical signature in advance, the user would be unable to use this method to attempt to rediscover the page. In addition, web browsers would have to be modified to use the lexical signature in the URI to attempt to rediscover the page. This meant that both web servers and browsers would have to implement this method for it to be usable.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1622499492"
],
"abstract": [
"We propose robust hyperlinks as a solution to the problem of broken hyperlinks. A robust hyperlink is a URL augmented with a small \"signature\", computed from the referenced document. The signature can be submitted as a query to web search engines to locate the document. It turns out that very small signatures are sufficient to readily locate individual documents out of the many millions on the web. Robust hyperlinks exhibit a number of desirable qualities: They can be computed and exploited automatically, are small and cheap to compute (so that it is practical to make all hyperlinks robust), do not require new server or infrastructure support, can be rolled out reasonably well in the existing URL syntax, can be used to automatically retrofit existing links to make them robust, and are easy to understand. In particular, one can start using robust hyperlinks now, as servers and web pages are mostly compatible as is, while clients can increase their support in the future. Robust hyperlinks are one example of using the web to bootstrap new features onto itself. PLEASE NOTE: a hypertext version of this paper is available at http: HTTP.CS.Berkeley.EDU wilensky robust-hyperlinks.html"
]
}
|
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
Park, Pennock, Giles, and Krovetz @cite_10 expanded on the work of Phelps and Wilensky by analyzing eight different formulas for calculating a lexical signature. Phelps and Wilensky's method involved dividing the number of times a term appears in a document, known as its term frequency (TF), by the number of documents in the corpus in which that term appears, known as its document frequency (DF); this quotient underlies the term frequency-inverse document frequency (TFIDF) weighting. Park et al. tested Phelps and Wilensky's original TFIDF variant, a simpler TFIDF, plain TF, and plain DF as so-called "basic" LSs. In addition, they tested several "hybrid" formulas, in which some of the LS terms were calculated with one formula and some with another, as potential ways to find relevant documents when the original document could not be found. They found that TFIDF was the best among the "basic" formulas, though some of the hybrid formulas performed better in certain use cases. They also used a five-term LS for lack of an empirical study on LS size, and noted that the effect of LS size was a topic for future research.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2117629987"
],
"abstract": [
"A lexical signature (LS) consisting of several key words from a Web document is often sufficient information for finding the document later, even if its URL has changed. We conduct a large-scale empirical study of nine methods for generating lexical signatures, including Phelps and Wilensky's original proposal (PW), seven of our own static variations, and one new dynamic method. We examine their performance on the Web over a 10-month period, and on a TREC data set, evaluating their ability to both (1) uniquely identify the original (possibly modified) document, and (2) locate other relevant documents if the original is lost. Lexical signatures chosen to minimize document frequency (DF) are good at unique identification but poor at finding relevant documents. PW works well on the relatively small TREC data set, but acts almost identically to DF on the Web, which contains billions of documents. Term-frequency-based lexical signatures (TF) are very easy to compute and often perform well, but are highly dependent on the ranking system of the search engine used. The term-frequency inverse-document-frequency- (TFIDF-) based method and hybrid methods (which combine DF with TF or TFIDF) seem to be the most promising candidates among static methods for generating effective lexical signatures. We propose a dynamic LS generator called Test & Select (TS) to mitigate LS conflict. TS outperforms all eight static methods in terms of both extracting the desired document and finding relevant information, over three different search engines. All LS methods show significant performance degradation as documents in the corpus are edited."
]
}
|
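A minimal sketch of the "basic" term-scoring families discussed in this row (TF, DF, and a TF-IDF-style score) applied to pick a k-term lexical signature. The toy corpus, the tokenizer, and the exact weighting are illustrative; the formulas studied by Phelps and Wilensky and by Park et al. differ in their details.

```python
# Sketch: compute a k-term lexical signature for a target document using one
# of the "basic" scoring families (TF, DF, or a TF-IDF-style score). The toy
# corpus and the exact weighting are illustrative assumptions only.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def lexical_signature(target, corpus, k=5, scheme="tfidf"):
    tf = Counter(tokenize(target))          # term frequency in the target
    df = Counter()                          # document frequency in the corpus
    for doc in corpus:
        df.update(set(tokenize(doc)))
    n = len(corpus)

    def score(term):
        if scheme == "tf":
            return tf[term]
        if scheme == "df":
            return -df[term]                # minimize DF: rarer terms win
        return tf[term] * math.log(n / (1 + df[term]))  # TF-IDF style

    return sorted(tf, key=score, reverse=True)[:k]

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "a missing web page can be rediscovered with a lexical signature",
    "search engines index the text of web pages",
]
target = "rediscover a missing web page using a lexical signature query"
print(lexical_signature(target, corpus, k=4, scheme="tfidf"))
```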
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
Klein and Nelson @cite_0 proposed calculating lexical signatures from archived or cached versions of a page. They showed that a five-term or a seven-term lexical signature produces the best results: a seven-term LS did best at returning the URI as the first result, whereas a five-term LS performed best at returning the URI somewhere on the first page of results.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1518577300"
],
"abstract": [
"A lexical signature (LS) is a small set of terms derived from a document that capture the \"aboutness\" of that document. A LS generated from a web page can be used to discover that page at a different URL as well as to find relevant pages in the Internet. From a set of randomly selected URLs we took all their copies from the Internet Archive between 1996 and 2007 and generated their LSs. We conducted an overlap analysis of terms in all LSs and found only small overlaps in the early years (1996 i¾? 2000) but increasing numbers in the more recent past (from 2003 on). We measured the performance of all LSs in dependence of the number of terms they consist of. We found that LSs created more recently perform better than early LSs created between 1996 and 2000. All LSs created from year 2000 on show a similar pattern in their performance curve. Our results show that 5-, 6- and 7-term LSs perform best with returning the URLs of interest in the top ten of the result set. In about 50 of all cases these URLs are returned as the number one result and in 30 of all times we considered the URLs as not discoved."
]
}
|
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
Henzinger, Chang, Milch, and Brin @cite_8 used lexical signatures derived from newscast transcripts to find related articles in real time. Their input, rather than being a static web page, was a constantly flowing stream of text from the transcript. Their method took into account the temporal locality of terms, that is, words spoken close together in the broadcast, in order to compute LSs that are each relevant to a single story rather than spanning subsequent stories. Their observations showed that, in contrast to the five-term LSs used in prior studies, a two-term lexical signature worked best in this application.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1985004975"
],
"abstract": [
"We present Opal, a light-weight framework for interactively locating missing web pages (http status code 404). Opal is an example of \"in vivo\" preservation: harnessing the collective behavior of web archives, commercial search engines, and research projects for the purpose of preservation. Opal servers learn from their experiences and are able to share their knowledge with other Opal servers by mutual harvesting using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Using cached copies that can be found on the web, Opal creates lexical signatures which are then used to search for similar versions of the web page. We present the architecture of the Opal framework, discuss a reference implementation of the framework, and present a quantitative analysis of the framework that indicates that Opal could be effectively deployed."
]
}
|
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
@cite_3 showed the effectiveness of anchor text in describing a resource. They demonstrated that for a specific user need, which they called the site-finding problem, anchor text of backlinks provided a more effective way of ranking documents than did the content of the target page itself.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2164052363"
],
"abstract": [
"Link-based ranking methods have been described in the literature and applied in commercial Web search engines. However, according to recent TREC experiments, they are no better than traditional content-based methods. We conduct a different type of experiment, in which the task is to find the main entry point of a specific Web site. In our experiments, ranking based on link anchor text is twice as effective as ranking based on document content, even though both methods used the same BM25 formula. We obtained these results using two sets of 100 queries on a 18.5 million document set and another set of 100 on a 0.4 million document set. This site finding effectiveness begins to explain why many search engines have adopted link methods. It also opens a rich new area for effectiveness improvement, where traditional methods fail."
]
}
|
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
Sugiyama, Hatano, Yoshikawa, and Uemura @cite_1 proposed enhancing the feature vector of a web page by including its link neighborhood. That is, they proposed that a search engine could more accurately describe the contents of a page by including information from both in-links (backlinks) and out-links. They tested up to third-level in- and out-links, and found that only links up to the second level were helpful. In some of the methods they tested, only the first-level links were helpful.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2136016364"
],
"abstract": [
"In IR (information retrieval) systems based on the vector space model, the TF-IDF scheme is widely used to characterize documents. However, in the case of documents with hyperlink structures such as Web pages, it is necessary to develop a technique for representing the contents of Web pages more accurately by exploiting the contents of their hyperlinked neighboring pages. In this paper, we first propose several approaches to refining the TF-IDF scheme for a target Web page by using the contents of its hyperlinked neighboring pages, and then compare the retrieval accuracy of our proposed approaches. Experimental results show that, generally, more accurate feature vectors of a target Web page can be generated in the case of utilizing the contents of its hyperlinked neighboring pages at levels up to second in the backward direction from the target page."
]
}
|
1102.0930
|
1747464344
|
For discovering the new URI of a missing web page, lexical signatures, which consist of a small number of words chosen to represent the "aboutness" of a page, have been previously proposed. However, prior methods relied on computing the lexical signature before the page was lost, or using cached or archived versions of the page to calculate a lexical signature. We demonstrate a system of constructing a lexical signature for a page from its link neighborhood, that is the "backlinks", or pages that link to the missing page. After testing various methods, we show that one can construct a lexical signature for a missing web page using only ten backlink pages. Further, we show that only the first level of backlinks are useful in this effort. The text that the backlinks use to point to the missing page is used as input for the creation of a four-word lexical signature. That lexical signature is shown to successfully find the target URI in over half of the test cases.
|
@cite_12 explored the correlation between anchor text and page titles. They showed that the collective nature of anchor text, written independently by many people, adds significant value. Anchor text is created by a thought process similar to that behind queries, and so tends to use similar words to describe a topic. Because links to a page are made by many authors, each with their own word preferences, anchor text can supply synonyms that the original page's author might never use. They even showed that anchor texts written by different authors in different languages can be used to provide a bilingual corpus for machine learning of natural language translation.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"155984473"
],
"abstract": [
"In the Navigational Retrieval Subtask 2 (Navi-2) at the NTCIR-5 WEB Task, a hypothetical user knows a specific item (e.g., a product, company, and person) and requires to find one or more representative Web pages related to the item. This paper describes our system participated in the Navi-2 subtask and reports the evaluation results of our system. Our system uses three types of information obtained from the NTCIR5 Web collection: page content, anchor text, and link structure. Specifically, we exploit anchor text in two perspectives. First, we compare the effectiveness of two different methods to model anchor text. Second, we use anchor text to extract synonyms for query expansion purposes. We show the effectiveness of our system experimentally."
]
}
|
1102.1746
|
2103320153
|
The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a1, …, aσ} is defined as the vector of multiplicities of the characters, p(s) = (p1, …, pσ), where pi = |{ j | sj = ai }|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n^2) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement and leads to an expected query time of , where m = ∑i qi is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to , i.e., by a factor of log m.
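As an illustration of the O(n) sliding-window baseline mentioned above, the following Python sketch reports every starting position at which a query Parikh vector occurs; representing the Parikh vector as a character-count dictionary is an implementation convenience of this sketch, not a detail from the paper.

from collections import Counter

def jumbled_occurrences(s, q):
    # q: mapping character -> required multiplicity (the query Parikh vector).
    m = sum(q.values())
    if m == 0 or m > len(s):
        return []
    window = Counter(s[:m])          # counts of the current length-m window
    hits = [0] if window == q else []
    for i in range(1, len(s) - m + 1):
        window[s[i - 1]] -= 1        # drop the character leaving the window
        if window[s[i - 1]] == 0:
            del window[s[i - 1]]
        window[s[i + m - 1]] += 1    # add the character entering the window
        if window == q:
            hits.append(i)
    return hits

print(jumbled_occurrences("abaab", Counter("ab")))  # [0, 1, 3]: "ab", "ba", "ab"

Each window update takes constant time, so the whole scan is linear in n, which is exactly the worst-case optimal baseline the two proposed algorithms aim to beat for repeated queries on a fixed text.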
|
Jumbled pattern matching is a special case of approximate pattern matching. It has been used as a filtering step in approximate pattern matching algorithms @cite_6 , but has rarely been considered in its own right.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2012739991"
],
"abstract": [
"Experimental comparisons of the running time of approximate string matching algorithms for the k differences problem are presented. Given a pattern string, a text string, and an integer k, the task is to find all approximate occurrences of the pattern in the text with at most k differences (insertions, deletions, changes). We consider seven algorithms based on different approaches including dynamic programming, Boyer-Moore string matching, suffix automata, and the distribution of characters. It turns out that none of the algorithms is the best for all values of the problem parameters, and the speed differences between the methods can be considerable."
]
}
|
1102.1746
|
2103320153
|
The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a1, …, aσ} is defined as the vector of multiplicities of the characters, p(s) = (p1, …, pσ), where pi = |{ j | sj = ai }|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n^2) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement and leads to an expected query time of , where m = ∑i qi is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to , i.e., by a factor of log m.
|
The authors of @cite_15 present an algorithm for finding all occurrences of a Parikh vector in a run-length encoded text. The algorithm's time complexity is @math , where @math is the length of the run-length encoding of @math . If the string is not already run-length encoded, a preprocessing phase of time @math has to be added; this may still be worthwhile if many queries are expected. To the best of our knowledge, this is the only algorithm that has previously been presented for the problem we treat here.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2012621171"
],
"abstract": [
"The goal of scaled permuted string matching is to find all occurrences of a pattern in a text, in all possible scales and permutations. Given a text of length n and a pattern of length m we present an O(n) algorithm."
]
}
|
1102.1746
|
2103320153
|
The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a1, …, aσ} is defined as the vector of multiplicities of the characters, p(s) = (p1, …, pσ), where pi = |{ j | sj = ai }|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n^2) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement and leads to an expected query time of , where m = ∑i qi is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to , i.e., by a factor of log m.
|
An efficient algorithm for computing all Parikh fingerprints of substrings of a given string was developed in @cite_20 . Parikh fingerprints are Boolean vectors where the @math 'th entry is @math if and only if @math appears in the string. The algorithm involves storing a data point for each Parikh fingerprint, of which there are at most @math many. This approach was adapted in @cite_16 for Parikh vectors and applied to identifying all repeated Parikh vectors within a given length range; using it to search for queries of arbitrary length would imply using @math space, where @math denotes the number of different Parikh vectors of substrings of @math . This is not desirable, since, for arbitrary alphabets, there are non-trivial strings of any length with quadratic @math @cite_22 .
|
{
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_20"
],
"mid": [
"1977563497",
"2031705959",
"2032064287"
],
"abstract": [
"Functionally related genes often appear in each other's neighborhood on the genome; however, the order of the genes may not be the same. These groups or clusters of genes may have an ancient evolutionary origin or may signify some other critical phenomenon and may also aid in function prediction of genes. Such gene clusters also aid toward solving the problem of local alignment of genes. Similarly, clusters of protein domains, albeit appearing in different orders in the protein sequence, suggest common functionality in spite of being nonhomologous. In the paper, we address the problem of automatically discovering clusters of entities, be they genes or domains: we formalize the abstract problem as a discovery problem called the πpattern problem and give an algorithm that automatically discovers the clusters of patterns in multiple data sequences. We take a model-less approach and introduce a notation for maximal patterns that drastically reduces the number of valid cluster patterns, without any loss of inf...",
"Abstract We investigate a problem which arises in computational biology: Given a constant-size alphabet A with a weight function μ : A → N , find an efficient data structure and query algorithm solving the following problem: For a string σ over A and a weight M∈ N , decide whether σ contains a substring with weight M , where the weight of a string is the sum of the weights of its letters (O NE -S TRING M ASS F INDING P ROBLEM ). If the answer is yes , then we may in addition require a witness, i.e., indices i ⩽ j such that the substring beginning at position i and ending at position j has weight M . We allow preprocessing of the string and measure efficiency in two parameters: storage space required for the preprocessed data and running time of the query algorithm for given M . We are interested in data structures and algorithms requiring subquadratic storage space and sublinear query time, where we measure the input size as the length n of the input string σ . Among others, we present two non-trivial efficient algorithms: L OOKUP solves the problem with O( n ) storage space and O (n log n) time; I NTERVAL solves the problem for binary alphabets with O( n ) storage space in O ( log n) query time. We introduce other variants of the problem and sketch how our algorithms may be extended for these variants. Finally, we discuss combinatorial properties of weighted strings.",
"We consider the problem of fingerprinting text by sets of symbols. Specifically, if S is a string, of length n, over a finite, ordered alphabet Σ, and S' is a substring of S, then the fingerprint of S' is the subset φ of Σ of precisely the symbols appearing in S'. In this paper we show efficient methods of answering various queries on fingerprint statistics. Our preprocessing is done in time O(n|Σ|log n log |Σ|) and enables answering the following queries: (1) Given an integer k, compute the number of distinct fingerprints of size k in time O(1). (2) Given a set φ ⊆ Σ, compute the total number of distinct occurrences in S of substrings with fingerprint φ in time O(|Σ|logn)."
]
}
|
1102.0603
|
2047083748
|
In this paper, we present controllers that enable mobile robots to persistently monitor or sweep a changing environment. The environment is modeled as a field that is defined over a finite set of locations. The field grows linearly at locations that are not within the range of a robot and decreases linearly at locations that are within range of a robot. We assume that the robots travel on given closed paths. The speed of each robot along its path is controlled to prevent the field from growing unbounded at any location. We consider the space of speed controllers that are parametrized by a finite set of basis functions. For a single robot, we develop a linear program that computes a speed controller in this space to keep the field bounded, if such a controller exists. Another linear program is derived to compute the speed controller that minimizes the maximum field value over the environment. We extend our linear program formulation to develop a multirobot controller that keeps the field bounded. We characterize, both theoretically and in simulation, the robustness of the controllers to modeling errors and to stochasticity in the environment.
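For intuition, the following toy simulation sketches the field model described above for a single robot on a closed path: the field grows where the robot's footprint is absent and shrinks where it is present. The growth and consumption rates, the unit-width footprint, the discretization, and the illustrative speed profile are all assumptions of this sketch; the paper's basis-function parametrization of the controller and its linear-program synthesis are not reproduced here.

import numpy as np

n_loc = 20                          # discretized locations along a closed path
g = np.full(n_loc, 0.1)             # assumed growth rates at uncovered locations
c = np.full(n_loc, 5.0)             # assumed consumption rates while covered
# Assumed speed profile along the path (one value per location), strictly positive.
speed = 1.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_loc, endpoint=False))
dt = 0.01
field = np.zeros(n_loc)
pos = 0.0                           # robot position along the path

for _ in range(100000):
    pos = (pos + speed[int(pos)] * dt) % n_loc
    covered = np.zeros(n_loc, dtype=bool)
    covered[int(pos)] = True                # unit-width sensor footprint
    field += dt * np.where(covered, -c, g)  # shrink where covered, grow elsewhere
    field = np.maximum(field, 0.0)          # the field cannot become negative

print("max field value after simulation:", float(field.max()))

With these assumed rates the consumption during each pass outweighs the accumulated growth, so the simulated field stays bounded; slowing the robot or lowering the consumption rate enough would make the field grow without bound, which is the failure mode the stability linear program is designed to rule out.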
|
Although these works are well-motivated by the uncontested successes of Kalman filtering and Kriging in real-world estimation applications, they suffer from the fact that planning optimal trajectories under these models requires the solution of an intractable dynamic program, even for a static environment. One must resort to myopic methods, such as gradient descent (as in @cite_1 @cite_17 @cite_23 @cite_38 @cite_10 ), or solve the DP approximately over a finite time horizon (as in @cite_24 @cite_16 @cite_15 ). Although these methods have great appeal from an estimation point of view, little can be proved about the comparative performance of the control strategies employed in these works. The approach we take in this paper circumvents the question of estimation by formulating a new model of growing uncertainty in the environment. Under this model, we can solve the speed planning problem over an infinite horizon, while maintaining bounded levels of uncertainty in a changing environment. Thus we have used a less sophisticated environment model in order to obtain stronger results on the control strategy. Because our model is based on the analogy of dust collecting in an environment, we also solve infinite horizon sweeping problems with the same method.
|
{
"cite_N": [
"@cite_38",
"@cite_1",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2169822448",
"1508222191",
"2101888913",
"2096108718",
"2172103629",
"2114549813",
"2112411455",
"2142705361"
],
"abstract": [
"Autonomous mobile sensor networks are employed to measure large-scale environmental fields. Yet an optimal strategy for mission design addressing both the cooperative motion control and the cooperative sensing is still an open problem. We develop strategies for multiple sensor platforms to explore a noisy scalar field in the plane. Our method consists of three parts. First, we design provably convergent cooperative Kalman filters that apply to general cooperative exploration missions. Second, we present a novel method to determine the shape of the platform formation to minimize error in the estimates and design a cooperative formation control law to asymptotically achieve the optimal formation shape. Third, we use the cooperative filter estimates in a provably convergent motion control law that drives the center of the platform formation to move along level curves of the field. This control law can be replaced by control laws enabling other cooperative exploration motion, such as gradient climbing, without changing the cooperative filters and the cooperative formation control laws. Performance is demonstrated on simulated underwater platforms in simulated ocean fields.",
"",
"This work deals with trajectory optimization for a robotic sensor network sampling a spatio-temporal random field. We examine the optimal sampling problem of minimizing the maximum predictive variance of the estimator over the space of network trajectories. This is a high-dimensional, multi-modal, nonsmooth optimization problem, known to be NP-hard even for static fields and discrete design spaces. Under an asymptotic regime of near-independence between distinct sample locations, we show that the solutions to a novel generalized disk-covering problem are solutions to the optimal sampling problem. This result effectively transforms the search for the optimal trajectories into a geometric optimization problem. Constrained versions of the latter are also of interest as they can accommodate trajectories that satisfy a maximum velocity restriction on the robots. We characterize the solution for the unconstrained and constrained versions of the geometric optimization problem as generalized multicircumcenter trajectories, and provide algorithms which enable the network to find them in a distributed fashion. Several simulations illustrate our results.",
"We describe a framework for the design of collective behaviors for groups of identical mobile agents. The approach is based on decentralized simultaneous estimation and control, where each agent communicates with neighbors and estimates the global performance properties of the swarm needed to make a local control decision. Challenges of the approach include designing a control law with desired convergence properties, assuming each agent has perfect global knowledge; designing an estimator that allows each agent to make correct estimates of the global properties needed to implement the controller; and possibly modifying the controller to recover desired convergence properties when using the estimates of global performance. We apply this framework to the problem of controlling the moment statistics describing the location and shape of a swarm. We derive conditions which guarantee that the formation statistics are driven to desired values, even in the presence of a changing network topology.",
"Exploration involving mapping and concurrent localization in an unknown environment is a pervasive task in mobile robotics. In general, the accuracy of the mapping process depends directly on the accuracy of the localization process. This paper address the problem of maximizing the accuracy of the map building process during exploration by adaptively selecting control actions that maximize localisation accuracy. The map building and exploration task is modeled using an Occupancy Grid (OG) with concurrent localisation performed using a feature-based Simultaneous Localisation And Mapping (SLAM) algorithm. Adaptive sensing aims at maximizing the map information by simultaneously maximizing the expected Shannon information gain (Mutual Information) on the OG map and minimizing the uncertainty of the vehicle pose and map feature uncertainty in the SLAM process. The resulting map building system is demonstrated in an indoor environment using data from a laser scanner mounted on a mobile platform.",
"We consider the problem of optimizing the trajectory of a mobile sensor with perfect localization whose task is to estimate a stochastic, perhaps multidimensional field modeling the environment. When the estimator is the Kalman filter, and for certain classes of objective functions capturing the informativeness of the sensor paths, the sensor trajectory optimization problem is a deterministic optimal control problem. This estimation problem arises in many applications besides the field estimation problem, such as active mapping with mobile robots. The main difficulties in solving this problem are computational, since the Gaussian process of interest is usually high dimensional. We review some recent work on this problem and propose a suboptimal non-greedy trajectory optimization scheme with a manageable computational cost, at least in static field models based on sparse graphical models.",
"This paper considers robotic sensor networks performing spatially-distributed estimation tasks. A robotic sensor network is deployed in an environment of interest, and takes successive point measurements of a dynamic physical process modeled as a spatio-temporal random field. Taking a Bayesian perspective on the Kriging interpolation technique from geostatistics, we design the distributed Kriged Kalman filter for predictive inference of the random field and of its gradient. The proposed algorithm makes use of a novel distributed strategy to compute weighted least squares estimates when measurements are spatially correlated. This strategy results from the combination of the Jacobi overrelaxation method with dynamic average consensus algorithms. As an application of the proposed algorithm, we design a gradient ascent cooperative strategy and analyze its convergence properties in the absence of measurement errors via stochastic Lyapunov functions. We illustrate our results in simulation.",
"Cooperating mobile sensors can be used to model environmental functions such as the temperature or salinity of a region of ocean. In this paper, we adopt an optimal filtering approach to fusing local sensor data into a global model of the environment. Our approach is based on the use of proportional-integral (PI) average consensus estimators, whereby information from each mobile sensor diffuses through the communication network. As a result, this approach is scalable and fully decentralized, and allows changing network topologies and anonymous agents to be added and subtracted at any time. We also derive control laws for mobile sensors to move to maximize their sensory information relative to current uncertainties in the model. The approach is demonstrated by simulations including modeling ocean temperature."
]
}
|
1102.0603
|
2047083748
|
In this paper, we present controllers that enable mobile robots to persistently monitor or sweep a changing environment. The environment is modeled as a field that is defined over a finite set of locations. The field grows linearly at locations that are not within the range of a robot and decreases linearly at locations that are within range of a robot. We assume that the robots travel on given closed paths. The speed of each robot along its path is controlled to prevent the field from growing unbounded at any location. We consider the space of speed controllers that are parametrized by a finite set of basis functions. For a single robot, we develop a linear program that computes a speed controller in this space to keep the field bounded, if such a controller exists. Another linear program is derived to compute the speed controller that minimizes the maximum field value over the environment. We extend our linear program formulation to develop a multirobot controller that keeps the field bounded. We characterize, both theoretically and in simulation, the robustness of the controllers to modeling errors and to stochasticity in the environment.
|
Our problem in this paper is also related to sweep coverage, or lawn mowing and milling problems, in which robots with finite sensor footprints move over an environment so that every point in the environment is visited at least once by a robot. Lawn mowing and milling have been treated in @cite_27 and other works. Sweep coverage has recently been studied in @cite_9 , and in @cite_12 efficient sweep coverage algorithms are proposed for ant robots. A survey of sweep coverage is given in @cite_3 . Our problem is significantly different from these because our environment is dynamic, thereby requiring continual re-milling or re-sweeping. A different notion of persistent surveillance has been considered in @cite_35 and @cite_28 , where a persistent task is defined as one whose completion takes much longer than the life of a robot. While the terminology is similar, our problem is more concerned with the task (sweeping or monitoring) itself than with the power requirements of individual robots.
|
{
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_27",
"@cite_12"
],
"mid": [
"2141154005",
"2092297192",
"2022750663",
"1544032329",
"1535557537",
"1584030176"
],
"abstract": [
"Unmanned aerial vehicles (UAVs) are well-suited to a wide range of mission scenarios, such as search and rescue, border patrol, and military surveillance. The complex and distributed nature of these missions often requires teams of UAVs to work together. Furthermore, overall mission performance can be strongly influenced by vehicle failures or degradations, so an autonomous mission system must account for the possibility of these anomalies if it is to maximize performance. This paper presents a general health management methodology for designing mission systems that can anticipate the negative effects of various types of anomalies on the future mission state and choose actions that mitigate those effects. The formulation is then specialized to the problem of providing persistent surveillance coverage using a group of UAVs, where uncertain fuel usage dynamics and strong interdependence effects between vehicles must be considered. Finally, the paper presents results showing that the health-aware persistent surveillance planner based on this formulation exhibits excellent performance in both simulated and real flight test experiments.",
"This paper presents an extension of our previous work on the persistent surveillance problem. An extended problem formulation incorporates real-time changes in agent capabilities as estimated by an onboard health monitoring system in addition to the existing communication constraints, stochastic sensor failure and fuel flow models, and the basic constraints of providing surveillance coverage using a team of autonomous agents. An approximate policy for the persistent surveillance problem is computed using a parallel, distributed implementation of the approximate dynamic programming algorithm known as Bellman Residual Elimination. This paper also presents flight test results which demonstrate that this approximate policy correctly coordinates the team to simultaneously provide reliable surveillance coverage and a communications link for the duration of the mission and appropriately retasks agents to maintain these services in the event of agent capability degradation.",
"This paper presents an algorithm for the complete coverage of free space by a team of mobile robots. Our approach is based on a single robot coverage algorithm, which divides the target two-dimensional space into regions called cells, each of which can be covered with simple back-and-forth motions; the decomposition of free space in a collection of such cells is known as Boustrophedon decomposition. Single robot coverage is achieved by ensuring that the robot visits every cell. The new multi-robot coverage algorithm uses the same planar cell-based decomposition as the single robot approach, but provides extensions to handle how teams of robots cover a single cell and how teams are allocated among cells. This method allows planning to occur in a two-dimensional configuration space for a team of N robots. The robots operate under the restriction that communication between two robots is available only when they are within line of sight of each other.",
"This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms.",
"We study the problem of finding shortest tours paths for “lawn mowing” and “milling” problems: Given a region in the plane, and given the shape of a “cutter” (typically, a circle or a square), find a shortest tour path for the cutter such that every point within the region is covered by the cutter at some position along the tour path. In the milling version of the problem, the cutter is constrained to stay within the region. The milling problem arises naturally in the area of automatic tool path generation for NC pocket machining. The lawn mowing problem arises in optical inspection, spray painting, and optimal search planning. Both problems are NP-hard in general. We give efficient constant-factor approximation algorithms for both problems. In particular, we give a (3+e)-approximation algorithm for the lawn mowing problem and a 2.5-approximation algorithm for the milling problem. Furthermore, we give a simple 65-approximation algorithm for the TSP problem in simple grid graphs, which leads to an 115-approximation algorithm for milling simple rectilinear polygons.",
"Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build. This makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Ant robots cannot use conventional planning methods due to their limited sensing and computational capabilities. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, nor plan complete paths. The ant robots do not even need to be localized, which completely eliminates solving difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing this (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising. We show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations even in simple terrains that correspond to (planar) undirected trees."
]
}
|
1102.0603
|
2047083748
|
In this paper, we present controllers that enable mobile robots to persistently monitor or sweep a changing environment. The environment is modeled as a field that is defined over a finite set of locations. The field grows linearly at locations that are not within the range of a robot and decreases linearly at locations that are within range of a robot. We assume that the robots travel on given closed paths. The speed of each robot along its path is controlled to prevent the field from growing unbounded at any location. We consider the space of speed controllers that are parametrized by a finite set of basis functions. For a single robot, we develop a linear program that computes a speed controller in this space to keep the field bounded, if such a controller exists. Another linear program is derived to compute the speed controller that minimizes the maximum field value over the environment. We extend our linear program formulation to develop a multirobot controller that keeps the field bounded. We characterize, both theoretically and in simulation, the robustness of the controllers to modeling errors and to stochasticity in the environment.
|
A problem more closely related to ours is that of patrolling @cite_21 @cite_25 , where an environment must be continually surveyed by a group of robots such that each point is visited with equal frequency. Similarly, in @cite_34 vehicles must repeatedly visit the cells of a gridded environment. Continual perimeter patrolling is addressed in @cite_19 . In another related work, a region is persistently covered in @cite_37 by controlling robots to move at constant speed along predefined paths. Our work is different from these, however, in that we treat the situation in which different parts of the environment may require different levels of attention. This is a significant difference, as it induces a difficult resource trade-off problem of the kind one typically finds in queueing theory @cite_31 or dynamic vehicle routing @cite_33 @cite_6 . In @cite_32 , the authors consider unequal frequency of visits in a gridded environment, but they control the robots using a greedy method that does not have performance guarantees.
|
{
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_19",
"@cite_31",
"@cite_34",
"@cite_25"
],
"mid": [
"2025291543",
"1971739135",
"1603050797",
"2140903796",
"2006275224",
"2161128818",
"",
"2165298539",
"2148926063"
],
"abstract": [
"In this paper we address the problem of persistent coverage of a given convex polygonal region in the plane. We present an algorithmic solution for this problem that takes into account the limitations in the communication and sensing ranges of the agents. We show that our algorithm provides a persistent coverage period that is bounded away by a constant factor from a minimal tour among the centers of an optimal static cover, which is comprised of circles having the same radius as the sensing radius of the agents. A simulation is provided that illustrates the algorithm.",
"In 1991, D. J. Bertsimas and G. van Ryzin introduced and analyzed a model for stochastic and dynamic vehicle routing in which a single, uncapacitated vehicle traveling at a constant velocity in a Euclidean region must service demands whose time of arrival, location and on-site service are stochastic. The objective is to find a policy to service demands over an infinite horizon that minimizes the expected system time (wait plus service) of the demands. This paper extends our analysis in several directions. First, we analyze the problem of m identical vehicles with unlimited capacity and show that in heavy traffic the system time is reduced by a factor of 1 m2 over the single-server case. One of these policies improves by a factor of two on the best known system time for the single-server case. We then consider the case in which each vehicle can serve at most q customers before returning to a depot. We show that the stability condition in this case depends strongly on the geometry of the region. Several pol...",
"A group of agents can be used to perform patrolling tasks in a variety of domains ranging from computer network administration to computer wargame simulations. The multi-agent patrolling problem has recently received growing attention from the multi-agent community, due to the wide range of potential applications. Many algorithms based on reactive and cognitive architectures have been developed, giving encouraging results. However, no theoretical analysis of this problem has been conducted. In this paper, various classes of patrolling strategies are proposed and compared. More precisely, these classes are compared to the optimal strategy by means of a standard complexity analysis.",
"As mobile robots become increasingly autonomous over extended periods of time, opportunities arise for their use on repetitive tasks. We define and implement behaviors for a class of such tasks that we call continuous area sweeping tasks. A continuous area sweeping task is one in which a group of robots must repeatedly visit all points in a fixed area, possibly with nonuniform frequency, as specified by a task-dependent cost function. Examples of problems that need continuous area sweeping are trash removal in a large building and routine surveillance. In our previous work we have introduced a single-robot approach to this problem. In this paper, we extend that approach to multi-robot scenarios. The focus of this paper is adaptive and decentralized task assignment in continuous area sweeping problems, with the aim of ensuring stability in environments with dynamic factors, such as robot malfunctions or the addition of new robots to the team. Our proposed negotiation-based approach is fully implemented and tested both in simulation and on physical robots",
"In this paper we introduce a dynamic vehicle routing problem in which there are multiple vehicles and multiple priority classes of service demands. Service demands of each priority class arrive in the environment randomly over time and require a random amount of on-site service that is characteristic of the class. To service a demand, one of the vehicles must travel to the demand location and remain there for the required on-site service time. The quality of service provided to each class is given by the expected delay between the arrival of a demand in the class and that demand's service completion. The goal is to design a routing policy for the service vehicles which minimizes a convex combination of the delays for each class. First, we provide a lower bound on the achievable values of the convex combination of delays. Then, we propose a novel routing policy and analyze its performance under heavy-load conditions (i.e., when the fraction of time the service vehicles spend performing on-site service approaches one). The policy performs within a constant factor of the lower bound, where the constant depends only on the number of classes, and is independent of the number of vehicles, the arrival rates of demands, the on-site service times, and the convex combination coefficients.",
"This paper poses the cooperative perimeter-surveillance problem and offers a decentralized solution that accounts for perimeter growth (expanding or contracting) and insertion deletion of team members. By identifying and sharing the critical coordination information and by exploiting the known communication topology, only a small communication range is required for accurate performance. Simulation and hardware results are presented that demonstrate the applicability of the solution.",
"",
"Search and exploration using multiple autonomous sensing platforms has been extensively studied in the fields of controls and artificial intelligence. The task of persistent surveillance is different from a coverage or exploration problem, in that the target area needs to be continuously searched, minimizing the time between visitations to the same region. This difference does not allow a straightforward application of most exploration techniques to the problem, although ideas from these methods can still be used. In this research we investigate techniques that are scalable, reliable, efficient, and robust to problem dynamics. These are tested in a multiple unmanned air vehicle (UAV) simulation environment, developed for this program. A semi-heuristic control policy for a single UAV is extended to the case of multiple UAVs using two methods. One is an extension of a reactive policy for a single UAV and the other involves allocation of sub-regions to individual UAVs for parallel exploration. An optimal assignment procedure (based on auction algorithms) has also been developed for this purpose. A comparison is made between the two approaches and a simplified optimal result. The reactive policy is found to exhibit an interesting emergent behavior as the number of UAVs becomes large. The control policy derived for a single UAV is modified to account for actual aircraft dynamics (a 3 degree-of-freedom nonlinear dynamics simulation is used for this purpose) and improvements in performance are observed. Finally, we draw conclusions about the utility and efficiency of these techniques.",
"This paper discusses the problem of generating patrol paths for a team of mobile robots inside a designated target area. Patrolling requires an area to be visited repeatedly by the robot(s) in order to monitor its current state. First, we present frequency optimization criteria used for evaluation of patrol algorithms. We then present a patrol algorithm that guarantees maximal uniform frequency, i.e., each point in the target area is covered at the same optimal frequency. This solution is based on finding a circular path that visits all points in the area, while taking into account terrain directionality and velocity constraints. Robots are positioned uniformly along this path, using a second algorithm. Moreover, the solution is guaranteed to be robust in the sense that uniform frequency of the patrol is achieved as long as at least one robot works properly."
]
}
|
1101.6016
|
1566855083
|
In this paper, we tackle the problem of opportunistic spectrum access in large-scale cognitive radio networks, where the unlicensed Secondary Users (SU) access the frequency channels partially occupied by the licensed Primary Users (PU). Each channel is characterized by an availability probability unknown to the SUs. We apply evolutionary game theory to model the spectrum access problem and develop distributed spectrum access policies based on imitation, a behavior rule widely applied in human societies consisting of imitating successful behavior. We first develop two imitation-based spectrum access policies based on the basic Proportional Imitation (PI) rule and the more advanced Double Imitation (DI) rule given that a SU can imitate any other SUs. We then adapt the proposed policies to a more practical scenario where a SU can only imitate the other SUs operating on the same channel. A systematic theoretical analysis is presented for both scenarios on the induced imitation dynamics and the convergence properties of the proposed policies to an imitation-stable equilibrium, which is also the @math -optimum of the system. Simple, natural and incentive-compatible, the proposed imitation-based spectrum access policies can be implemented distributedly based on solely local interactions and thus is especially suited in decentralized adaptive learning environments as cognitive radio networks.
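As a rough illustration of imitation-based channel selection, the sketch below implements the generic proportional imitation rule: a secondary user samples a peer and copies its channel with probability proportional to the payoff gap. The payoff normalization, the sampling model, and the function name are simplifying assumptions of this sketch; the paper's PI and DI policies and their convergence analysis are more involved.

import random

def pi_update(my_channel, my_payoff, peer_channel, peer_payoff, max_payoff_gap):
    # Imitate the sampled peer with probability proportional to the payoff gap.
    if peer_payoff > my_payoff:
        p = (peer_payoff - my_payoff) / max_payoff_gap   # switching probability in [0, 1]
        if random.random() < p:
            return peer_channel                          # switch to the peer's channel
    return my_channel                                    # otherwise keep the current channel

# Example: an SU on channel 2 with payoff 0.3 samples a peer on channel 5 with payoff 0.7.
print(pi_update(2, 0.3, 5, 0.7, max_payoff_gap=1.0))

In the channel-restricted variant described above, the sampled peer would be drawn only from the SUs currently operating on the same channel, which changes the sampling step but not the proportional switching rule itself.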
|
Due to the success of applying evolutionary game theory to biological and economic problems, a handful of recent studies have applied it as a tool to study resource allocation problems arising in wired and wireless networks. Shakkottai addressed the problem of non-cooperative multi-homing of users to access points in IEEE 802.11 WLANs by modeling it as a population game and studied the equilibrium properties of the game @cite_10 ; Niyato studied the dynamics of network selection in a heterogeneous wireless network using evolutionary game theory and the replicator dynamic, and proposed two network selection algorithms to reach the evolutionary equilibrium @cite_2 ; Ackermann investigated concurrent imitation dynamics in the context of symmetric congestion games, focusing on their convergence properties @cite_7 ; Niyato studied the multiple-seller, multiple-buyer spectrum trading game in cognitive radio networks using the replicator dynamic and provided a theoretical analysis for the two-seller, two-group-buyer case @cite_23 . Coucheney studied the user-network association problem in wireless networks with multiple radio technologies and proposed an algorithm to achieve a fair and efficient solution @cite_1 .
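Since several of the works above rely on the replicator dynamic, a short background sketch of its generic form follows; the congestion-style payoff functions and step size are purely illustrative and are not taken from any of the cited papers.

import numpy as np

def replicator_step(x, payoff, dt=0.01):
    # One Euler step of the replicator dynamic: x_i' = x_i * (payoff_i - average payoff).
    avg = float(np.dot(x, payoff))
    return x + dt * x * (payoff - avg)

# Toy example: two channels whose payoff falls as more users crowd onto them.
x = np.array([0.9, 0.1])                     # initial population shares
for _ in range(5000):
    payoff = np.array([1.0 / (0.1 + x[0]), 0.8 / (0.1 + x[1])])
    x = replicator_step(x, payoff)
    x = x / x.sum()                          # guard against numerical drift
print(x)                                     # shares settle where payoffs equalize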
|
{
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_23",
"@cite_2",
"@cite_10"
],
"mid": [
"2088375330",
"",
"2139766131",
"2114005980",
"2161300749"
],
"abstract": [
"Imitating successful behavior is a natural and frequently applied approach when facing scenarios for which we have little or no experience upon which we can base our decision. In this paper, we consider such behavior in atomic congestion games. We propose to study concurrent imitation dynamics that emerge when each player samples another player and possibly imitates this agents' strategy if the anticipated latency gain is sufficiently large. Our main focus is on convergence properties. Using a potential function argument, we show that these dynamics converge in a monotonic fashion to stable states. In such a state none of the players can improve their latency by imitating others. As our main result, we show rapid convergence to approximate equilibria. At an approximate equilibrium only a small fraction of agents sustains a latency significantly above or below average. In particular, imitation dynamics behave like fully polynomial time approximation schemes (FPTAS). Fixing all other parameters, the convergence time depends only in a logarithmic fashion on the number of agents. Since imitation processes are not innovative they cannot discover unused strategies. Furthermore, strategies may become extinct with non-zero probability. For the case of singleton games, we show that the probability of this event occurring is negligible. Additionally, we prove that the social cost of a stable state reached by our dynamics is not much worse than an optimal state in singleton congestion games with linear latency functions. While we concentrate on the case of symmetric network congestion games, most of our results do not explicitly use the network structure. They continue to hold accordingly for general symmetric and asymmetric congestion games when each player samples within his commodity.",
"",
"We consider the problem of spectrum trading with multiple licensed users (i.e., primary users) selling spectrum opportunities to multiple unlicensed users (i.e., secondary users). The secondary users can adapt the spectrum buying behavior (i.e., evolve) by observing the variations in price and quality of spectrum offered by the different primary users or primary service providers. The primary users or primary service providers can adjust their behavior in selling the spectrum opportunities to secondary users to achieve the highest utility. In this paper, we model the evolution and the dynamic behavior of secondary users using the theory of evolutionary game. An algorithm for the implementation of the evolution process of a secondary user is also presented. To model the competition among the primary users, a noncooperative game is formulated where the Nash equilibrium is considered as the solution (in terms of size of offered spectrum to the secondary users and spectrum price). For a primary user, an iterative algorithm for strategy adaptation to achieve the solution is presented. The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.",
"Next-generation wireless networks will integrate multiple wireless access technologies to provide seamless mobility to mobile users with high-speed wireless connectivity. This will give rise to a heterogeneous wireless access environment where network selection becomes crucial for load balancing to avoid network congestion and performance degradation. We study the dynamics of network selection in a heterogeneous wireless network using the theory of evolutionary games. The competition among groups of users in different service areas to share the limited amount of bandwidth in the available wireless access networks is formulated as a dynamic evolutionary game, and the evolutionary equilibrium is considered to be the solution to this game. We present two algorithms, namely, population evolution and reinforcement-learning algorithms for network selection. Although the network-selection algorithm based on population evolution can reach the evolutionary equilibrium faster, it requires a centralized controller to gather, process, and broadcast information about the users in the corresponding service area. In contrast, with reinforcement learning, a user can gradually learn (by interacting with the service provider) and adapt the decision on network selection to reach evolutionary equilibrium without any interaction with other users. Performance of the dynamic evolutionary game-based network-selection algorithms is empirically investigated. The accuracy of the numerical results obtained from the game model is evaluated by using simulations.",
"We consider non-cooperative mobiles, each faced with the problem of which subset of WLANs access points (APs) to connect and multihome to, and how to split its traffic among them. Considering the many users regime, we obtain a potential game model and study its equilibrium. We obtain pricing for which the total throughput is maximized at equilibrium and study the convergence to equilibrium under various evolutionary dynamics. We also study the case where the Internet service provider (ISP) could charge prices greater than that of the cost price mechanism and show that even in this case multihoming is desirable."
]
}
|
1101.5019
|
1679524110
|
We propose an algorithm to locate the most critical nodes to network robustness. Such critical nodes may be thought of as those most related to the notion of network centrality. Our proposal relies only on a localized spectral analysis of a limited subnetwork centered at each node in the network. We also present a procedure allowing the navigation from any node towards a critical node following only local information computed by the proposed algorithm. Experimental results confirm the effectiveness of our proposal considering networks of different scales and topological characteristics.
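To illustrate the idea of a localized spectral score, the sketch below rates each node by the algebraic connectivity of its h-hop ego subnetwork using networkx; the choice of algebraic connectivity, the radius, and the test graph are assumptions of this sketch and need not coincide with the paper's actual spectral criterion.

import networkx as nx
import numpy as np

def local_spectral_score(G, node, radius=2):
    # Spectral analysis restricted to the limited subnetwork centered at the node.
    sub = nx.ego_graph(G, node, radius=radius)
    L = nx.laplacian_matrix(sub).toarray().astype(float)
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[1] if len(eig) > 1 else 0.0   # algebraic connectivity of the subnetwork

G = nx.barabasi_albert_graph(200, 2, seed=1)
scores = {v: local_spectral_score(G, v) for v in G}
# Candidate critical nodes, proxied here by the weakest local connectivity.
print(sorted(scores, key=scores.get)[:5])

Because each score uses only the node's local neighborhood, such a computation can in principle be carried out without global knowledge of the topology, which is the setting the proposed algorithm targets.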
|
Network robustness is an important property derived from the connectivity level that directly impacts network reliability. There are many studies investigating network robustness in general, as well as methods to evaluate the connectivity level of a network @cite_2 @cite_13 @cite_9 . Nevertheless, to the best of our knowledge, only a few recent works target the distributed evaluation and location of the nodes most critical to network robustness, i.e., assessing node centrality @cite_10 @cite_20 @cite_18 in a distributed way.
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_9",
"@cite_2",
"@cite_10",
"@cite_20"
],
"mid": [
"2070207525",
"1966279959",
"2170362389",
"2065769502",
"2103054781",
"2041757699"
],
"abstract": [
"Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.",
"Assessing network vulnerability before potential disruptive events such as natural disasters or malicious attacks is vital for network planning and risk management. It enables us to seek and safeguard against most destructive scenarios in which the overall network connectivity falls dramatically. Existing vulnerability assessments mainly focus on investigating the inhomogeneous properties of graph elements, node degree for example, however, these measures and the corresponding heuristic solutions can provide neither an accurate evaluation over general network topologies, nor performance guarantees to large scale networks. To this end, in this paper, we investigate a measure called pairwise connectivity and formulate this vulnerability assessment problem as a new graph-theoretical optimization problem called β-disruptor, which aims to discover the set of critical node edges, whose removal results in the maximum decline of the global pairwise connectivity. Our results consist of the NP-Completeness and inapproximability proof of this problem, an O(log n loglog n) pseudo-approximation algorithm for detecting the set of critical nodes and an O(log^1.5 n) pseudo-approximation algorithm for detecting the set of critical edges. In addition, we devise an efficient heuristic algorithm and validate the performance of the our model and algorithms through extensive simulations.",
"We consider the issue of protection in very large networks displaying randomness in topology. We employ random graph models to describe such networks, and obtain probabilistic bounds on several parameters related to reliability. In particular, we take the case of random regular networks for simplicity and consider the length of primary and backup paths in terms of the number of hops. First, for a randomly picked pair of nodes, we derive a lower bound on the average distance between the pair and discuss the tightness of the bound. In addition, noting that primary and protection paths form cycles, we obtain a lower bound on the average length of the shortest cycle around the pair. Finally, we show that the protected connections of a given maximum finite length are rare. We then generalize our network model so that different degrees are allowed according to some arbitrary distribution, and show that the second moment of degree over the first moment is an important shorthand for behavior of a network. Notably, we show that most of the results in regular networks carry over with minor modifications, which significantly broadens the scope of networks to which our approach applies. We present as an example the case of networks with a power-law degree distribution.",
"Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3,4,5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.",
"Centrality is a concept often used in social network analysis to study different properties of networks that are modeled as graphs. We present a new centrality metric called localized bridging centrality (LBC). LBC is based on the bridging centrality (BC) metric that recently introduced. Bridging nodes are nodes that are strategically located in between highly connected regions. LBC is capable of identifying bridging nodes with an accuracy comparable to that of the BC metric for most networks. As the name suggests, we use only local information from surrounding nodes to compute the LBC metric, whereas, global knowledge is required to calculate the BC metric. The main difference between LBC and BC is that LBC uses the egocentric definition of betweenness centrality to identify bridging nodes, while BC uses the sociocentric definition of betweenness centrality. Thus, our LBC metric is suitable for distributed or parallel computation and has the benefit of being an order of magnitude faster to calculate in computational complexity. We compare the results produced by BC and LBC in three examples. We applied our LBC metric for network analysis of a real wireless mesh network. Our results indicate that the LBC metric is as powerful as the BC metric at identifying bridging nodes. The LBC metric is thus an important tool that can help network administrators identify critical nodes that are important for the robustness of the network in a distributed manner.",
"A complex network can be modeled as a graph representing the ''who knows who'' relationship. In the context of graph theory for social networks, the notion of centrality is used to assess the relative importance of nodes in a given network topology. For example, in a network composed of large dense clusters connected through only a few links, the nodes involved in those links are particularly critical as far as the network survivability is concerned. This may also impact any application running on top of it. Such information can be exploited for various topological maintenance issues to prevent congestion and disruption. This can also be used offline to identify the most important actors in large social interaction graphs. Several forms of centrality have been proposed so far. Yet, they suffer from imperfections: initially designed for small social graphs, they are either of limited use (degree centrality), either incompatible in a distributed setting (e.g. random walk betweenness centrality). In this paper we introduce a novel form of centrality: the second order centrality which can be computed in a distributed manner. This provides locally each node with a value reflecting its relative criticity and relies on a random walk visiting the network in an unbiased fashion. To this end, each node records the time elapsed between visits of that random walk (called return time in the sequel) and computes the standard deviation (or second order moment) of such return times. The key point is that central nodes see regularly the random walk compared to other topology nodes. Both through theoretical analysis and simulation, we show that the standard deviation can be used to accurately identify critical nodes as well as to globally characterize graphs topology in a distributed way. We finally compare our proposal to well-known centralities to assess its competitivity."
]
}
|
1101.5509
|
1630388165
|
Privacy-preserving techniques for distributed computation have been proposed recently as a promising framework for collaborative inter-domain network monitoring. Several different approaches exist to solve this class of problems, e.g., Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC) based on Shamir's Secret Sharing algorithm (SSS). Such techniques are complete from a computation-theoretic perspective: given a set of private inputs, it is possible to perform arbitrary computation tasks without revealing any of the intermediate results. In fact, HE and SSS can also operate on secret inputs and/or provide secret outputs. However, they are computationally expensive and do not scale well in the number of players and/or in the rate of computation tasks. In this paper we advocate the use of "elementary" (as opposed to "complete") Secure Multiparty Computation (E-SMC) procedures for traffic monitoring. E-SMC supports only simple computations with private input and public output, i.e., it cannot handle secret input nor secret (intermediate) output. Such a simplification brings a dramatic reduction in complexity and enables massive-scale implementation with acceptable delay and overhead. Notwithstanding its simplicity, we claim that an E-SMC scheme is sufficient to perform a great variety of computation tasks of practical relevance to collaborative network monitoring, including, e.g., anonymous publishing and set operations. This is achieved by combining an E-SMC scheme with data structures like Bloom Filters and bitmap strings.
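As a purely illustrative companion to this abstract (and not the paper's exact construction), the following Python sketch shows how a Bloom filter bitmap can be combined with an elementary secure-sum primitive whose output is public: each player inserts its private items into a local Bloom filter, only the element-wise aggregate of the bitmaps is revealed, and membership in the union of the private sets can then be tested without attributing any item to a player. The filter size M, the number of hashes K, the sample inputs, and the secure_elementwise_sum stand-in are assumptions chosen for brevity.

import hashlib

M, K = 64, 3  # Bloom filter size and number of hash functions (toy values)

def positions(item):
    return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % M for i in range(K)]

def bloom(items):
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

def secure_elementwise_sum(bitmaps):
    # Stand-in for the elementary SMC primitive: only the aggregate is revealed.
    return [sum(col) for col in zip(*bitmaps)]

players = [{"10.0.0.1", "10.0.0.7"}, {"10.0.0.7", "192.168.1.9"}, {"172.16.0.2"}]
aggregate = secure_elementwise_sum([bloom(s) for s in players])

def maybe_in_union(item):
    return all(aggregate[p] > 0 for p in positions(item))

print(maybe_in_union("10.0.0.7"), maybe_in_union("8.8.8.8"))  # expected: True False (Bloom filters can give false positives)

False positives are inherent to Bloom filters and are controlled by the choice of M and K; the privacy argument rests entirely on the secure-sum primitive revealing nothing but the aggregate.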
|
SMC is a cryptographic framework introduced by Yao @cite_2 and later generalized in @cite_4 . SMC techniques have been widely used in the data mining community; for a comprehensive survey, please refer to @cite_5 . The authors of @cite_6 first proposed the use of SMC techniques for a number of applications relating to traffic measurements, including the estimation of global traffic volume and performance measurements @cite_18 . In addition, they showed that SMC techniques can be combined with commonly used traffic analysis methods and tools, such as time-series algorithms @cite_11 and sketch data structures.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_11"
],
"mid": [
"",
"2027471022",
"2113261716",
"2092422002",
"1548445892",
"2135297873"
],
"abstract": [
"",
"We present a polynomial-time algorithm that, given as a input the description of a game with incomplete information and any number of players , produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest. Our algorithm automatically solves all the multi-party protocol problems addressed in complexity-based cryptography during the last 10 years. It actually is a completeness theorem for the class of distributed protocols with honest majority. Such completeness theorem is optimal in the sense that, if the majority of the players is not honest, some protocol problems have no efficient solution [C].",
"The rapid growth of the Internet over the last decade has been startling. However, efforts to track its growth have often fallen afoul of bad data --- for instance, how much traffic does the Internet now carry? The problem is not that the data is technically hard to obtain, or that it does not exist, but rather that the data is not shared. Obtaining an overall picture requires data from multiple sources, few of whom are open to sharing such data, either because it violates privacy legislation, or exposes business secrets. Likewise, detection of global Internet health problems is hampered by a lack of data sharing. The approaches used so far in the Internet, e.g. trusted third parties, or data anonymization, have been only partially successful, and are not widely adopted.The paper presents a method for performing computations on shared data without any participants revealing their secret data. For example, one can compute the sum of traffic over a set of service providers without any service provider learning the traffic of another. The method is simple, scalable, and flexible enough to perform a wide range of valuable operations on Internet data.",
"Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other’s wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, . . . , xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x’s. Is it possible for them to compute the value of f , by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires’ problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.",
"Advances in hardware technology have increased the capability to store and record personal data about consumers and individuals, causing concerns that personal data may be used for a variety of intrusive or malicious purposes. Privacy-Preserving Data Mining: Models and Algorithms proposes a number of techniques to perform the data mining tasks in a privacy-preserving way. These techniques generally fall into the following categories: data modification techniques, cryptographic methods and protocols for data sharing, statistical techniques for disclosure and inference control, query auditing methods, randomization and perturbation-based techniques. This edited volume contains surveys by distinguished researchers in the privacy field. Each survey includes the key research content as well as future research directions. Privacy-Preserving Data Mining: Models and Algorithms is designed for researchers, professors, and advanced-level students in computer science, and is also suitable for industry practitioners.",
"Suppose a number of hospitals in a geographic area want to learn how their own heart-surgery unit is doing compared with the others in terms of mortality rates, subsequent complications, or any other quality metric. Similarly, a number of small businesses might want to use their recent point-of-sales data to cooperatively forecast future demand and thus make more informed decisions about inventory, capacity, employment, etc. These are simple examples of cooperative benchmarking and (respectively) forecasting that would benefit all participants as well as the public at large, as they would make it possible for participants to avail themselves of more precise and reliable data collected from many sources, to assess their own local performance in comparison to global trends, and to avoid many of the inefficiencies that currently arise because of having less information available for their decision-making. And yet, in spite of all these advantages, cooperative benchmarking and forecasting typically do not take place, because of the participants' unwillingness to share their information with others. Their reluctance to share is quite rational, and is due to fears of embarrassment, lawsuits, weakening their negotiating position (e.g., in case of over-capacity), revealing corporate performance and strategies, etc. The development and deployment of private benchmarking and forecasting technologies would allow such collaborations to take place without revealing any participant's data to the others, reaping the benefits of collaboration while avoiding the drawbacks. Moreover, this kind of technology would empower smaller organizations who could then cooperatively base their decisions on a much broader information base, in a way that is today restricted to only the largest corporations. This paper is a step towards this goal, as it gives protocols for forecasting and benchmarking that reveal to the participants the desired answers yet do not reveal to any participant any other participant's private data. We consider several forecasting methods, including linear regression and time series techniques such as moving average and exponential smoothing. One of the novel parts of this work, that further distinguishes it from previous work in secure multi-party computation, is that it involves floating point arithmetic, in particular it provides protocols to securely and efficiently perform division."
]
}
|
1101.5509
|
1630388165
|
Privacy-preserving techniques for distributed computation have been proposed recently as a promising framework for collaborative inter-domain network monitoring. Several different approaches exist to solve this class of problems, e.g., Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC) based on Shamir's Secret Sharing algorithm (SSS). Such techniques are complete from a computation-theoretic perspective: given a set of private inputs, it is possible to perform arbitrary computation tasks without revealing any of the intermediate results. In fact, HE and SSS can also operate on secret inputs and/or provide secret outputs. However, they are computationally expensive and do not scale well in the number of players and/or in the rate of computation tasks. In this paper we advocate the use of "elementary" (as opposed to "complete") Secure Multiparty Computation (E-SMC) procedures for traffic monitoring. E-SMC supports only simple computations with private input and public output, i.e., it cannot handle secret input nor secret (intermediate) output. Such a simplification brings a dramatic reduction in complexity and enables massive-scale implementation with acceptable delay and overhead. Notwithstanding its simplicity, we claim that an E-SMC scheme is sufficient to perform a great variety of computation tasks of practical relevance to collaborative network monitoring, including, e.g., anonymous publishing and set operations. This is achieved by combining an E-SMC scheme with data structures like Bloom Filters and bitmap strings.
|
However, for many years, SMC-based solutions have mainly been of theoretical interest due to impractical resource requirements. Only recently have generic SMC frameworks optimized for efficient processing of voluminous input data been developed @cite_14 @cite_12 . Today, it is possible to process hundreds of thousands of elements distributed across dozens of networks within a few minutes, for instance to generate distributed top-k reports @cite_13 . While these results are compelling, they adhere to the fully secret evaluation scheme. Our work aims at boosting scalability even further by relaxing the secrecy constraint for intermediate results. As such, our approach can be applied only in cases where the disclosure of intermediate results is not regarded as critical --- a quite frequent case in practical applications. Moreover, we aim at optimizing the sharing scheme for fast computation in the online phase.
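To make the relaxed-secrecy idea concrete, here is a minimal sketch, under illustrative assumptions, of an elementary secure sum with private inputs and a public output based on additive secret sharing modulo a public prime: each player splits its value into random shares, non-colluding aggregators publish only partial sums, and anyone can recombine them into the total. The prime, the number of aggregators, and the sample inputs are placeholders; deployed protocols such as those cited above are considerably more elaborate.

import secrets

P = 2**61 - 1  # public prime modulus (illustrative)

def share(value, n_shares):
    # Split `value` into additive shares that are individually uniform mod P.
    shares = [secrets.randbelow(P) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_inputs, n_aggregators=3):
    # Each player sends one share to each (assumed non-colluding) aggregator.
    buckets = [[] for _ in range(n_aggregators)]
    for x in private_inputs:
        for bucket, s in zip(buckets, share(x, n_aggregators)):
            bucket.append(s)
    # Each aggregator publishes only the sum of the shares it received.
    partial = [sum(b) % P for b in buckets]
    return sum(partial) % P  # public output: the total, and nothing else

traffic_volumes = [1200, 850, 4310, 77]   # private per-domain counts (made up)
print(secure_sum(traffic_volumes))        # 6437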
|
{
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_12"
],
"mid": [
"25045116",
"1974646334",
""
],
"abstract": [
"Secure multiparty computation (MPC) allows joint privacy-preserving computations on data of multiple parties. Although MPC has been studied substantially, building solutions that are practical in terms of computation and communication cost is still a major challenge. In this paper, we investigate the practical usefulness of MPC for multi-domain network security and monitoring. We first optimize MPC comparison operations for processing high volume data in near real-time. We then design privacy-preserving protocols for event correlation and aggregation of network traffic statistics, such as addition of volume metrics, computation of feature entropy, and distinct item count. Optimizing performance of parallel invocations, we implement our protocols along with a complete set of basic operations in a library called SEPIA. We evaluate the running time and bandwidth requirements of our protocols in realistic settings on a local cluster as well as on PlanetLab and show that they work in near real-time for up to 140 input providers and 9 computation nodes. Compared to implementations using existing general-purpose MPC frameworks, our protocols are significantly faster, requiring, for example, 3 minutes for a task that takes 2 days with general-purpose frameworks. This improvement paves the way for new applications of MPC in the area of networking. Finally, we run SEPIA's protocols on real traffic traces of 17 networks and show how they provide new possibilities for distributed troubleshooting and early anomaly detection.",
"Over the past several years a lot of research has focused on distributed top-k computation. In this work we are interested in the following privacy-preserving distributed top-k problem. A set of parties hold private lists of key-value pairs and want to find and disclose the k key-value pairs with largest aggregate values without revealing any other information. We use secure multiparty computation (MPC) techniques to solve this problem and design two MPC protocols, PPTK and PPTKS, putting emphasis on their efficiency. PPTK uses a hash table to condense a possibly large and sparse space of keys and to probabilistically estimate the aggregate values of the top-k keys. PPTKS uses multiple hash tables, i.e., sketches, to improve the estimation accuracy of PPTK. We evaluate our protocols using real traffic traces and show that they accurately and efficiently aggregate distributions of IP addresses and port numbers to find the globally most frequent IP addresses and port numbers.",
""
]
}
|
1101.5509
|
1630388165
|
Privacy-preserving techniques for distributed computation have been proposed recently as a promising framework for collaborative inter-domain network monitoring. Several different approaches exist to solve this class of problems, e.g., Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC) based on Shamir's Secret Sharing algorithm (SSS). Such techniques are complete from a computation-theoretic perspective: given a set of private inputs, it is possible to perform arbitrary computation tasks without revealing any of the intermediate results. In fact, HE and SSS can also operate on secret inputs and/or provide secret outputs. However, they are computationally expensive and do not scale well in the number of players and/or in the rate of computation tasks. In this paper we advocate the use of "elementary" (as opposed to "complete") Secure Multiparty Computation (E-SMC) procedures for traffic monitoring. E-SMC supports only simple computations with private input and public output, i.e., it cannot handle secret input nor secret (intermediate) output. Such a simplification brings a dramatic reduction in complexity and enables massive-scale implementation with acceptable delay and overhead. Notwithstanding its simplicity, we claim that an E-SMC scheme is sufficient to perform a great variety of computation tasks of practical relevance to collaborative network monitoring, including, e.g., anonymous publishing and set operations. This is achieved by combining an E-SMC scheme with data structures like Bloom Filters and bitmap strings.
|
When it comes to analyzing traffic data across multiple networks, various anonymization techniques have been proposed for obscuring sensitive local information (e.g., @cite_1 ). However, these methods are generally not lossless and introduce a delicate privacy-utility tradeoff @cite_21 . Moreover, the capability of anonymization to protect privacy has recently been called into question, both from a technical @cite_15 and a legal perspective @cite_3 .
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_1",
"@cite_3"
],
"mid": [
"2151856781",
"2147215426",
"1594076931",
"2557607105"
],
"abstract": [
"In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization or field generalization, are not particularly effective. We then discuss tradeoffs of the attacker and defender in the so-called injection attack space. An asymmetry in the attack space significantly increases the chance of a successful de-anonymization through lengthening the injected traffic pattern. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing, which uses anonymization as a part of a technical, legal, and social approach to data protection in the research and operations communities.",
"Releasing network measurement data---including packet traces---to the research community is a virtuous activity that promotes solid research. However, in practice, releasing anonymized packet traces for public use entails many more vexing considerations than just the usual notion of how to scramble IP addresses to preserve privacy. Publishing traces requires carefully balancing the security needs of the organization providing the trace with the research usefulness of the anonymized trace. In this paper we recount our experiences in (i) securing permission from a large site to release packet header traces of the site's internal traffic, (ii) implementing the corresponding anonymization policy, and (iii) validating its correctness. We present a general tool, tcpmkpub, for anonymizing traces, discuss the process used to determine the particular anonymization policy, and describe the use of metadata accompanying the traces to provide insight into features that have been obfuscated by anonymization",
"FLAIM (Framework for Log Anonymization and Information Management) addresses two important needs not well addressed by current log anonymizers. First, it is extremely modular and not tied to the specific log being anonymized. Second, it supports multi-level anonymization, allowing system administrators to make fine-grained trade-offs between information loss and privacy security concerns. In this paper, we examine anonymization solutions to date and note the above limitations in each. We further describe how FLAIM addresses these problems, and we describe FLAIM's architecture and features in detail.",
"Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often 'reidentify' or 'deanonymize' individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so."
]
}
|
1101.3979
|
2122725346
|
Network coding permits the deployment of distributed packet delivery algorithms that locally adapt to network availability in media streaming applications. However, it may also increase delay and computational complexity if it is not implemented efficiently. We address here the effective placement of a limited number of nodes that implement randomized network coding in overlay networks, so that the goodput is kept high while the decoding delay stays small in streaming applications. We first estimate the decoding delay at each client, which depends on the innovative rate in the network. This estimation allows us to identify the nodes that have to perform coding in order to reduce the decoding delay. We then propose two iterative algorithms for selecting the nodes that should perform network coding. The first algorithm relies on knowledge of the full network statistics. The second algorithm uses only local network statistics at each node. Simulation results show that large performance gains can be achieved with the selection of only a few network coding nodes. Moreover, the second algorithm performs very closely to the central estimation strategy, which demonstrates that the network coding nodes can be selected efficiently with the help of a distributed innovative flow rate estimation solution. Our solution provides large gains in terms of throughput, delay, and video quality in realistic overlay networks when compared to methods that employ traditional streaming strategies as well as random network coding node selection algorithms.
|
While the previous works mostly assume that the network is fully known at a central node, a decentralized algorithm for minimizing the number of network coding packets flowing in a network has been presented in @cite_11 . It also addresses the design of capacity-approaching network codes that minimize the set of network coding nodes. However, this algorithm does not provide any guarantee that the minimum set of network coding nodes can always be determined. While @cite_11 considers capacity-approaching codes without delay constraints, we rather use well-performing network codes and take the available network resources into account in order to select a set of network coding nodes such that the overall delay is kept small in multimedia applications. The choice of randomized network codes is mostly geared towards the implementation of practical distributed systems, where large benefits are expected from the proper choice of a limited number of network coding nodes.
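The following toy Python experiment, included only to illustrate the innovative-rate argument behind such node selection (it implements none of the cited algorithms), compares how many packet receptions a client needs before it can decode when an upstream node forwards plain copies of the k source packets versus when it sends random GF(2) combinations of them; the coding node delivers rank-increasing packets far more often, which is what keeps the decoding delay small. The assumption that the sending node already holds all k source packets is made purely for simplicity.

import random

def slots_to_decode(k, coding, rng):
    # Number of received packets until the receiver's coefficient matrix
    # reaches rank k (GF(2) Gaussian elimination on bitmask coding vectors).
    basis = {}
    slots = 0
    while len(basis) < k:
        slots += 1
        if coding:
            v = rng.getrandbits(k) or 1   # random GF(2) combination of the k sources
        else:
            v = 1 << rng.randrange(k)     # plain copy of a uniformly chosen source packet
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return slots

rng = random.Random(1)
k, trials = 8, 2000
for coding in (False, True):
    avg = sum(slots_to_decode(k, coding, rng) for _ in range(trials)) / trials
    print("coding node:" if coding else "forwarding node:", round(avg, 1), "packets to decode")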
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1997825969"
],
"abstract": [
"We give an information flow interpretation for multicasting using network coding. This generalizes the fluid model used to represent flows to a single receiver. Using the generalized model, we present a decentralized algorithm to minimize the number of packets that undergo network coding. We also propose a decentralized algorithm to construct capacity achieving multicast codes when the processing at some nodes is restricted to routing. The proposed algorithms can be coupled with existing decentralized schemes to achieve minimum cost multicast"
]
}
|
1101.3979
|
2122725346
|
Network coding permits the deployment of distributed packet delivery algorithms that locally adapt to network availability in media streaming applications. However, it may also increase delay and computational complexity if it is not implemented efficiently. We address here the effective placement of a limited number of nodes that implement randomized network coding in overlay networks, so that the goodput is kept high while the decoding delay stays small in streaming applications. We first estimate the decoding delay at each client, which depends on the innovative rate in the network. This estimation allows us to identify the nodes that have to perform coding in order to reduce the decoding delay. We then propose two iterative algorithms for selecting the nodes that should perform network coding. The first algorithm relies on knowledge of the full network statistics. The second algorithm uses only local network statistics at each node. Simulation results show that large performance gains can be achieved with the selection of only a few network coding nodes. Moreover, the second algorithm performs very closely to the central estimation strategy, which demonstrates that the network coding nodes can be selected efficiently with the help of a distributed innovative flow rate estimation solution. Our solution provides large gains in terms of throughput, delay, and video quality in realistic overlay networks when compared to methods that employ traditional streaming strategies as well as random network coding node selection algorithms.
|
In general, the previous works on the selection of coding nodes do not consider delay issues, which are most important in streaming applications. The problem of selecting network processing nodes in multimedia streaming applications has been addressed in @cite_15 , albeit in a framework slightly different from ours. The placement of a limited number of network-embedded FEC (NEF) nodes is considered in networks that are organized into multicast trees. The placement is chosen in order to enhance the robustness to transmission errors and to improve the network's throughput. NEF nodes first decode and then re-encode the recovered packets in order to increase the symbol diversity. A greedy algorithm is proposed for placing NEF nodes. Although the proposed method is effective, it is computationally expensive and unrealistic to deploy in dynamic networks. In contrast to @cite_15 , we consider the placement of processing nodes in the more general case of overlay mesh networks with randomized network coding for distributed packet delivery.
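Since the objective optimized in @cite_15 is tied to FEC block recovery and video quality, the following sketch only illustrates the greedy-selection idea on a deliberately simplified, made-up model: a repair node is treated as a perfect regenerator, so a leaf's residual loss depends only on the hops since the last repair point on its path, and nodes are added one by one up to a fixed budget so as to minimize the average leaf loss. The tree, the loss probability, and the budget are illustrative placeholders, not taken from the cited work.

p = 0.1  # per-link packet loss probability (toy value)

# Multicast tree as child -> parent; "s" is the streaming source.
parent = {"a": "s", "b": "s", "c": "a", "d": "a", "e": "b",
          "l1": "c", "l2": "c", "l3": "d", "l4": "e", "l5": "e"}
leaves = ["l1", "l2", "l3", "l4", "l5"]

def avg_leaf_loss(repair_nodes):
    # Toy objective: a repair node fully regenerates the stream, so a leaf's
    # residual loss depends only on the hops since the last repair point.
    total = 0.0
    for leaf in leaves:
        hops, node = 0, leaf
        while True:
            node = parent[node]
            hops += 1
            if node == "s" or node in repair_nodes:
                break
        total += 1.0 - (1.0 - p) ** hops
    return total / len(leaves)

def greedy_placement(candidates, budget):
    chosen = set()
    for _ in range(budget):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: avg_leaf_loss(chosen | {c}))
        chosen.add(best)
    return chosen

chosen = greedy_placement(["a", "b", "c", "d", "e"], budget=2)
print(chosen, round(avg_leaf_loss(chosen), 3))  # e.g. the two nodes closest to the leaf clusters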
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2093784538"
],
"abstract": [
"Forward error correction (FEC) schemes have been proposed and used successfully for multicasting realtime video content to groups of users. Under traditional IP multicast, application-level FEC can only be implemented on an end-to-end basis between the sender and the clients. Emerging overlay and peer-to-peer (p2p) networks open the door for new paradigms of network FEC. The deployment of FEC within these emerging networks has received very little attention (if any). In this paper, we analyze and optimize the impact of network-embedded FEC (NEF) in overlay and p2p multimedia multicast networks. Under NEF, we place FEC codecs in selected intermediate nodes of a multicast tree. The NEF codecs detect and recover lost packets within FEC blocks at earlier stages before these blocks arrive at deeper intermediate nodes or at the final leaf nodes. This approach significantly reduces the probability of receiving undecodable FEC blocks. In essence, the proposed NEF codecs work as signal regenerators in a communication system and can reconstruct most of the lost data packets without requiring retransmission. We develop an optimization algorithm for the placement of NEF codecs within random multicast trees. Based on extensive H.264 video simulations, we show that this approach provides significant improvements in video quality, both visually and in terms of PSNR values."
]
}
|
1101.3979
|
2122725346
|
Network coding permits the deployment of distributed packet delivery algorithms that locally adapt to network availability in media streaming applications. However, it may also increase delay and computational complexity if it is not implemented efficiently. We address here the effective placement of a limited number of nodes that implement randomized network coding in overlay networks, so that the goodput is kept high while the decoding delay stays small in streaming applications. We first estimate the decoding delay at each client, which depends on the innovative rate in the network. This estimation allows us to identify the nodes that have to perform coding in order to reduce the decoding delay. We then propose two iterative algorithms for selecting the nodes that should perform network coding. The first algorithm relies on knowledge of the full network statistics. The second algorithm uses only local network statistics at each node. Simulation results show that large performance gains can be achieved with the selection of only a few network coding nodes. Moreover, the second algorithm performs very closely to the central estimation strategy, which demonstrates that the network coding nodes can be selected efficiently with the help of a distributed innovative flow rate estimation solution. Our solution provides large gains in terms of throughput, delay, and video quality in realistic overlay networks when compared to methods that employ traditional streaming strategies as well as random network coding node selection algorithms.
|
Finally, game-theoretic concepts are adopted in a recent work @cite_7 for developing socially optimal distributed algorithms that decide which nodes should combine packets. Specifically, incentives such as extra download bandwidth are given to network nodes in order to switch their status to network coding and indirectly minimize the delays in the system. However, this algorithm does not offer any guarantee that limited resources will be used efficiently, since all the nodes may potentially desire to become network coding nodes. It is therefore not appropriate when a given number of network coding nodes has to be placed effectively in a network.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2130155168"
],
"abstract": [
"Network coding has been recently proposed as an efficient method to improve throughput, minimize delays and remove the need for reconciliation between network nodes in distributed streaming systems. It permits to take advantage of the path and node diversity in the network when the network coding nodes are placed efficiently. In this paper, we investigate networks consisting of nodes that autonomously determine whether they should perform network coding or not as well as their set of parent nodes. Each node makes its decisions that maximize its quality of service. The decisions include the selection of operation mode (i.e., network coding mode, simple data forwarding mode) and the selection of extra connections. The resulting interactions among the nodes are modeled as a congestion game, thereby ensuring an equilibrium, i.e., stable multimedia stream flow. The experimental results show that the proposed scheme is appropriate for distributed multimedia transmission since it provides a stable quality without imposing centralized control."
]
}
|
1101.4351
|
1650964121
|
Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.
|
Neuroscientists have been discussing the existence of chaos in the brain for some time. In the context of artificial neural networks, this interest has given rise to various works studying the modeling of chaos in neurons. The chaotic neuron model designed by Aihara @cite_11 is widely used to build chaotic neural networks. For example, in @cite_16 a feedback ANN architecture is proposed which consists of two layers (apart from the input layer), one of them composed of chaotic neurons. In their experiments, the authors showed that, without any input sequence, the activation of each chaotic neuron results in a positive average Lyapunov exponent, which indicates truly chaotic behavior. When an input sequence is given iteratively to the network, the chaotic neurons reach stabilized periodic orbits with different periods, and thus potentially provide a recognition state. Similarly, the same authors have recently introduced another model of chaotic neuron, the non-linear dynamic state (NDS) neuron, and used it to build a neural network which is able to recognize learned stabilized periodic orbits identifying patterns @cite_9 .
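For concreteness, the sketch below iterates a single neuron of the form y(t+1) = k*y(t) - alpha*f(y(t)) + a with a steep sigmoid f, which is a commonly used simplified reading of Aihara's chaotic neuron, and estimates the average Lyapunov exponent along the orbit for several values of the bias a; a positive estimate indicates chaotic behavior for that parameter choice. All parameter values are illustrative assumptions, not taken from the cited works.

import math

def f(u, eps):
    z = -u / eps
    if z > 700:          # avoid overflow; f(u) underflows to 0 here
        return 0.0
    return 1.0 / (1.0 + math.exp(z))

def lyapunov(a, k=0.7, alpha=1.0, eps=0.02, n=20000, burn=1000):
    # Mean of ln|g'(y_t)| along the orbit of y(t+1) = g(y(t)) = k*y - alpha*f(y) + a.
    y, acc = 0.1, 0.0
    for t in range(n + burn):
        fy = f(y, eps)
        deriv = k - alpha * fy * (1.0 - fy) / eps
        y = k * y - alpha * fy + a
        if t >= burn:
            acc += math.log(abs(deriv) + 1e-300)
    return acc / n

for a in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"a = {a:3.1f}  Lyapunov estimate = {lyapunov(a):+.3f}")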
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_11"
],
"mid": [
"2077196676",
"45575550",
"2007099865"
],
"abstract": [
"This research investigates the potential utility of chaotic dynamics in neural information processing. A novel chaotic spiking neural network model is presented which is composed of non-linear dynamic state (NDS) neurons. The activity of each NDS neuron is driven by a set of non-linear equations coupled with a threshold based spike output mechanism. If time-delayed self-connections are enabled then the network stabilises to a periodic pattern of activation. Previous publications of this work have demonstrated that the chaotic dynamics which drive the network activity ensure that an extremely large number of such periodic patterns can be generated by this network. This paper presents a major extension to this model which enables the network to recall a pattern of activity from a selection of previously stabilised patterns.",
"The basic premise of this research is that deterministic chaos is a powerful mechanismfor the storage and retrieval of information in the dynamics of artificial neuralnetworks. Substantial evidence has been found in biological studies for the presenceof chaos in the dynamics of natural neuronal systems [1-3]. Many have suggestedthat this chaos plays a central role in memory storage and retrieval [1,4-6]. Indeed,chaos offers many advantages over alternative memory storage mechanisms used inartificial neural networks. One is that chaotic dynamics are significantly easier tocontrol than other linear or non-linear systems, requiring only small appropriatelytimed perturbations to constrain them within specific Unstable Periodic Orbits(UPOs). Another is that chaotic attractors contain an infinite number of these UPOs.If individual UPOs can be made to represent specific internal memory states of asystem, then in theory a chaotic attractor can provide an infinite memorystore for thesystem. In this paper we investigate the possibility that a network can self-selectUPOs in response to specific dynamic input signals. These UPOs correspond tonetwork recognition states for these input signals.",
"Abstract A model of a single neuron with chaotic dynamics is proposed by considering the following properties of biological neurons: (1) graded responses, (2) relative refractoriness and (3) spatio-temporal summation of inputs. The model includes some conventional models of a neuron as its special cases; namely, chaotic dynamics is introduced as a natural extension of the former models. Chaotic solutions of both the single chaotic neuron and the chaotic neural network composed of such neurons are numerically demonstrated."
]
}
|
1101.4351
|
1650964121
|
Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.
|
Today, another field of research in which chaotic neural networks have received a lot of attention is data security. In fact, chaotic cryptosystems are an appealing alternative to classical ones due to properties such as sensitivity to initial conditions or topological transitivity. Thus chaotic ANNs have been considered for building ciphering methods, hash functions, digital watermarking schemes, pseudo-random number generators, etc. In @cite_4 , a cipher scheme based on the dynamics of Chua's circuit is proposed. More precisely, a feed-forward MLP with two hidden layers is built to learn about 1500 input-output vector pairs, where each pair is obtained from the three nonlinear ordinary differential equations modeling the circuit. Hence, the proposed chaotic neural network is trained to learn a truly chaotic physical system. In the cipher scheme, the ANN plays the role of a chaos generator with which the plaintext is merged. Untrained neural networks have also been considered to define block ciphers @cite_13 or hash functions @cite_17 . The underlying idea is to exploit the inherent properties of the ANN architecture, such as diffusion and confusion.
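The sketch below illustrates the chaos-generator idea in a hedged form: it integrates Chua's dimensionless equations directly (with commonly quoted double-scroll parameters) and XORs a crudely quantized state variable with the plaintext, whereas in the cited scheme an MLP is first trained to reproduce these dynamics and then plays the generator role. The parameters, quantization, and key handling are illustrative only and offer no cryptographic guarantees.

def chua_deriv(state, alpha=15.6, beta=28.0, m0=-8.0 / 7.0, m1=-5.0 / 7.0):
    x, y, z = state
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return (alpha * (y - x - h), x - y + z, -beta * y)

def rk4_step(state, dt=0.005):
    def add(s, k, f): return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = chua_deriv(state)
    k2 = chua_deriv(add(state, k1, dt / 2))
    k3 = chua_deriv(add(state, k2, dt / 2))
    k4 = chua_deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def keystream(n_bytes, key=(0.7, 0.0, 0.0), skip=50):
    state, out = key, []
    for _ in range(1000):              # discard the transient
        state = rk4_step(state)
    for _ in range(n_bytes):
        for _ in range(skip):          # decorrelate successive bytes
            state = rk4_step(state)
        out.append(int((state[0] % 1.0) * 256) % 256)   # crude quantization
    return bytes(out)

plain = b"secret traffic report"
ks = keystream(len(plain))
cipher = bytes(p ^ k for p, k in zip(plain, ks))
assert bytes(c ^ k for c, k in zip(cipher, ks)) == plain   # decryption recovers the plaintext
print(cipher.hex())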
|
{
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_17"
],
"mid": [
"1520311163",
"2126402645",
"2088359425"
],
"abstract": [
"In this paper, the neural network composed of a chaotic neuron layer and a linear neuron layer is used to construct a block cipher that transforms the data from the plaintext form into the unintelligible form under the control of the key. Among them, the chaotic neuron layer realizes data diffusion, the linear neuron layer realizes data confusion, and the two layers are repeated for several times to strengthen the cipher. The decryption process is symmetric to the encryption process. Theoretical analysis and experimental results show that the block cipher has good computing security and is more suitable for image encryption. It is expected to attract more researchers in this field.",
"Chaotic systems are sensitive to initial conditions, system parameters and topological transitivity and these properties are also remarkable for cryptanalysts. Noise like behavior of chaotic systems is the main reason of using these systems in cryptology. However some properties of chaotic systems such as synchronization, fewness of parameters etc. cause serious problems for cryptology. In this paper, to overcome disadvantages of chaotic systems, the dynamics of Chua's circuit namely x , y and z were modeled using Artificial Neural Network (ANN). ANNs have some distinctive capabilities like learning from experiences, generalizing from a few data and nonlinear relationship between inputs and outputs. The proposed ANN was trained in different structures using different learning algorithms. To train the ANN, 24 different sets including the initial conditions of Chua's circuit were used and each set consisted of about 1800 input-output data. The experimental results showed that a feed-forward Multi Layer Perceptron (MLP), trained with Bayesian Regulation backpropagation algorithm, was found as the suitable network structure. As a case study, a message was first encrypted and then decrypted by the chaotic dynamics obtained from the proposed ANN and a comparison was made between the proposed ANN and the numerical solution of Chua's circuit about encrypted and decrypted messages.",
"An algorithm for constructing a one-way novel Hash function based on two-layer chaotic neural network structure is proposed. The piecewise linear chaotic map (PWLCM) is utilized as transfer function, and the 4-dimensional and one-way coupled map lattices (4D OWCML) is employed as key generator of the chaotic neural network. Theoretical analysis and computer simulation indicate that the proposed algorithm presents several interesting features, such as high message and key sensitivity, good statistical properties, collision resistance and secure against meet-in-the-middle attacks, which can satisfy the performance requirements of Hash function."
]
}
|
1101.4240
|
2035780823
|
Genetic regulatory networks enable cells to respond to changes in internal and external conditions by dynamically coordinating their gene expression profiles. Our ability to make quantitative measurements in these biochemical circuits has deepened our understanding of what kinds of computations genetic regulatory networks can perform, and with what reliability. These advances have motivated researchers to look for connections between the architecture and function of genetic regulatory networks. Transmitting information between a network's inputs and outputs has been proposed as one such possible measure of function, relevant in certain biological contexts. Here we summarize recent developments in the application of information theory to gene regulatory networks. We first review basic concepts in information theory necessary for understanding recent work. We then discuss the functional complexity of gene regulation, which arises from the molecular nature of the regulatory interactions. We end by reviewing some experiments that support the view that genetic networks responsible for early development of multicellular organisms might be maximizing transmitted 'positional information'.
|
Other applications of information theory to cell regulation have been developed which consider bounds on information transmission in biological systems, such as finding the minimum rate at which information must be transmitted in the system to ensure that the readout remains within a fixed tolerance of the signal -- these bounds have much to do with the approach that views cells and organisms as trying to ``decode'' noisy environmental signals and make optimal decisions based on these data @cite_47 . Information theory has also been used to discuss chemotaxis @cite_74 @cite_19 , i.e., navigation on the basis of noisy inputs.
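As a small worked example of the quantities involved, the following snippet computes the mutual information I(c;g) = sum_{c,g} p(c,g) log2[p(c,g)/(p(c)p(g))] between a two-state environment (poor or rich in nutrients, echoing the setting of @cite_47 ) and a noisy binary gene-expression readout; the probabilities are illustrative numbers, not data from the cited works.

from math import log2

# p(c): environment poor/rich with equal probability; p(g|c): noisy readout.
p_c = {"poor": 0.5, "rich": 0.5}
p_g_given_c = {"poor": {"off": 0.9, "on": 0.1},
               "rich": {"off": 0.2, "on": 0.8}}

joint = {(c, g): p_c[c] * p_g_given_c[c][g] for c in p_c for g in ("off", "on")}
p_g = {g: sum(joint[(c, g)] for c in p_c) for g in ("off", "on")}

I = sum(p * log2(p / (p_c[c] * p_g[g])) for (c, g), p in joint.items() if p > 0)
print(f"I(c;g) = {I:.3f} bits")   # well below 1 bit because the readout is noisy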
|
{
"cite_N": [
"@cite_19",
"@cite_47",
"@cite_74"
],
"mid": [
"",
"2102869475",
"2046695302"
],
"abstract": [
"",
"Cells must respond to environmental changes to remain viable, yet the information they receive is often noisy. Through a biochemical implementation of Bayes's rule, we show that genetic networks can act as inference modules, inferring from intracellular conditions the likely state of the extracellular environment and regulating gene expression appropriately. By considering a two-state environment, either poor or rich in nutrients, we show that promoter occupancy is proportional to the (posterior) probability of the high nutrient state given current intracellular information. We demonstrate that single-gene networks inferring and responding to a high environmental state infer best when negatively controlled, and those inferring and responding to a low environmental state infer best when positively controlled. Our interpretation is supported by experimental data from the lac operon and should provide a basis for both understanding more complex cellular decision-making and designing synthetic inference circuits.",
"A computational model of odour plume propagation and experimental data are used to devise a general search algorithm for movement strategies in chemotaxis, based on sporadic cues and partial information. The strategy is termed 'infotaxis' as it locally maximizes the expected rate of information gain."
]
}
|
1101.4240
|
2035780823
|
Genetic regulatory networks enable cells to respond to changes in internal and external conditions by dynamically coordinating their gene expression profiles. Our ability to make quantitative measurements in these biochemical circuits has deepened our understanding of what kinds of computations genetic regulatory networks can perform, and with what reliability. These advances have motivated researchers to look for connections between the architecture and function of genetic regulatory networks. Transmitting information between a network's inputs and outputs has been proposed as one such possible measure of function, relevant in certain biological contexts. Here we summarize recent developments in the application of information theory to gene regulatory networks. We first review basic concepts in information theory necessary for understanding recent work. We then discuss the functional complexity of gene regulation, which arises from the molecular nature of the regulatory interactions. We end by reviewing some experiments that support the view that genetic networks responsible for early development of multicellular organisms might be maximizing transmitted 'positional information'.
|
A recent paper has also raised the interesting topic of learning about biological systems from the way they systematically deviate from optimality predictions @cite_62 .
|
{
"cite_N": [
"@cite_62"
],
"mid": [
"1982140705"
],
"abstract": [
"Optimization theory has been used to analyze evolutionary adaptation. This theory has explained many features of biological systems, from the genetic code to animal behavior. However, these systems show important deviations from optimality. Typically, these deviations are large in some particular components of the system, whereas others seem to be almost optimal. Deviations from optimality may be due to many factors in evolution, including stochastic effects and finite time, that may not allow the system to reach the ideal optimum. However, we still expect the system to have a higher probability of reaching a state with a higher value of the proposed indirect measure of fitness. In systems of many components, this implies that the largest deviations are expected in those components with less impact on the indirect measure of fitness. Here, we show that this simple probabilistic rule explains deviations from optimality in two very different biological systems. In Caenorhabditis elegans, this rule successfully explains the experimental deviations of the position of neurons from the configuration of minimal wiring cost. In Escherichia coli, the probabilistic rule correctly obtains the structure of the experimental deviations of metabolic fluxes from the configuration that maximizes biomass production. This approach is proposed to explain or predict more data than optimization theory while using no extra parameters. Thus, it can also be used to find and refine hypotheses about which constraints have shaped biological structures in evolution."
]
}
|
1101.4609
|
2949265624
|
Motivated by the growing interest in mobile systems, we study the dynamics of information dissemination between agents moving independently on a plane. Formally, we consider @math mobile agents performing independent random walks on an @math -node grid. At time @math , each agent is located at a random node of the grid and one agent has a rumor. The spread of the rumor is governed by a dynamic communication graph process @math , where two agents are connected by an edge in @math iff their distance at time @math is within their transmission radius @math . Modeling the physical reality that the speed of radio transmission is much faster than the motion of the agents, we assume that the rumor can travel throughout a connected component of @math before the graph is altered by the motion. We study the broadcast time @math of the system, which is the time it takes for all agents to know the rumor. We focus on the sparse case (below the percolation point @math ) where, with high probability, no connected component in @math has more than a logarithmic number of agents and the broadcast time is dominated by the time it takes for many independent random walks to meet each other. Quite surprisingly, we show that for a system below the percolation point the broadcast time does not depend on the relation between the mobility speed and the transmission radius. In fact, we prove that @math for any @math , even when the transmission range is significantly larger than the mobility range in one step, giving a tight characterization up to logarithmic factors. Our result complements a recent result of (SODA 2011) who showed that above the percolation point the broadcast time is polylogarithmic in @math .
|
A prolific line of research has addressed broadcasting and gossiping in static graphs, where the nodes of the graph represent active entities that exchange messages along incident edges according to specific protocols. The most recent results in this area relate the performance of the protocols to expansion properties of the underlying topology, with particular attention to the case of social networks, where broadcasting is often referred to as rumor spreading @cite_2 . (For a relatively recent, comprehensive survey on this subject, see @cite_8 .)
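To keep the comparison with the mobile setting concrete, the following minimal simulation of PUSH-PULL rumor spreading on a static graph counts the rounds until every node is informed when, in each round, every node contacts one uniformly random neighbor; the ring-with-chords topology and the parameters are illustrative only and are unrelated to the graphs analyzed in the cited works.

import random

def push_pull_rounds(adj, source, rng):
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        contacts = {v: rng.choice(adj[v]) for v in adj}
        newly = set()
        for v, w in contacts.items():
            if v in informed and w not in informed:
                newly.add(w)                 # PUSH
            if v not in informed and w in informed:
                newly.add(v)                 # PULL
        informed |= newly
    return rounds

rng = random.Random(0)
n = 256
adj = {i: [(i - 1) % n, (i + 1) % n, (i + n // 2) % n] for i in range(n)}  # ring plus chords
print(sum(push_pull_rounds(adj, 0, rng) for _ in range(20)) / 20, "rounds on average")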
|
{
"cite_N": [
"@cite_8",
"@cite_2"
],
"mid": [
"1521790357",
"2009356484"
],
"abstract": [
"If you want to get Handbook of Internet Computing pdf eBook copy write by good Handbook of Wireless Networks and Mobile Computing Google Books. Mobile Computing General. Handbook of Algorithms for Wireless Networking and Mobile Computing by Azzedine Boukerche (Editor). Call Number: TK 5103.2. CITS4419 Mobile and Wireless Computing software projects related to wireless networks, (2) write technical reports and documentation for complex computer.",
"We show that if a connected graph with @math nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(φ-1 • log n), many rounds with high probability, regardless of the source, by using the PUSH-PULL strategy. The O(••) notation hides a polylog φ-1 factor. This result is almost tight since there exists graph of n nodes, and conductance φ, with diameter Ω(φ-1 • log n). If, in addition, the network satisfies some kind of uniformity condition on the degrees, our analysis implies that both both PUSH and PULL, by themselves, successfully broadcast the message to every node in the same number of rounds."
]
}
|
1101.4609
|
2949265624
|
Motivated by the growing interest in mobile systems, we study the dynamics of information dissemination between agents moving independently on a plane. Formally, we consider @math mobile agents performing independent random walks on an @math -node grid. At time @math , each agent is located at a random node of the grid and one agent has a rumor. The spread of the rumor is governed by a dynamic communication graph process @math , where two agents are connected by an edge in @math iff their distance at time @math is within their transmission radius @math . Modeling the physical reality that the speed of radio transmission is much faster than the motion of the agents, we assume that the rumor can travel throughout a connected component of @math before the graph is altered by the motion. We study the broadcast time @math of the system, which is the time it takes for all agents to know the rumor. We focus on the sparse case (below the percolation point @math ) where, with high probability, no connected component in @math has more than a logarithmic number of agents and the broadcast time is dominated by the time it takes for many independent random walks to meet each other. Quite surprisingly, we show that for a system below the percolation point the broadcast time does not depend on the relation between the mobility speed and the transmission radius. In fact, we prove that @math for any @math , even when the transmission range is significantly larger than the mobility range in one step, giving a tight characterization up to logarithmic factors. Our result complements a recent result of (SODA 2011) who showed that above the percolation point the broadcast time is polylogarithmic in @math .
|
@cite_21 @cite_24 the authors study the time it takes to broadcast information from one of @math mobile agents to all others. The agents move on a square grid of @math nodes and in each time step, an agent can (a) exchange information with all agents at distance at most @math from it, and (b) move to any random node at distance at most @math from its current position. The results in these papers only apply to a very dense scenario where the number of agents is linear in the number of grid nodes (i.e., @math ). They show that the broadcast time is @math w.h.p., when @math and @math @cite_21 , and it is @math w.h.p., when @math @cite_24 . These results crucially rely on @math , which implies that the range of agents' communications or movements at each step defines a connected graph.
|
{
"cite_N": [
"@cite_24",
"@cite_21"
],
"mid": [
"2949185120",
"2952627339"
],
"abstract": [
"We consider a Mobile Ad-hoc NETworks (MANET) formed by \"n\" nodes that move independently at random over a finite square region of the plane. Nodes exchange data if they are at distance at most \"r\" within each other, where r>0 is the node transmission radius. The \"flooding time\" is the number of time steps required to broadcast a message from a source node to every node of the network. Flooding time is an important measure of the speed of information spreading in dynamic networks. We derive a nearly-tight upper bound on the flooding time which is a decreasing function of the maximal \"velocity\" of the nodes. It turns out that, when the node velocity is sufficiently high, even if the node transmission radius \"r\" is far below the \"connectivity threshold\", the flooding time does not asymptotically depend on \"r\". This implies that flooding can be very fast even though every \"snapshot\" (i.e. the static random geometric graph at any fixed time) of the MANET is fully disconnected. Data reach all nodes quickly despite these ones use very low transmission power. Our result is the first analytical evidence of the fact that high, random node mobility strongly speed-up information spreading and, at the same time, let nodes save energy.",
"Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the \"stationary phase\" by analyzing the completion time of the \"flooding mechanism\". We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. \"Geometric Markovian evolving graphs\" where the Markovian behaviour is yielded by \"n\" mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. \"Edge-Markovian evolving graphs\" where the probability of existence of any edge at time \"t\" depends on the existence (or not) of the same edge at time \"t-1\". In both cases, the obtained upper bounds hold \"with high probability\" and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks."
]
}
|
1101.4609
|
2949265624
|
Motivated by the growing interest in mobile systems, we study the dynamics of information dissemination between agents moving independently on a plane. Formally, we consider @math mobile agents performing independent random walks on an @math -node grid. At time @math , each agent is located at a random node of the grid and one agent has a rumor. The spread of the rumor is governed by a dynamic communication graph process @math , where two agents are connected by an edge in @math iff their distance at time @math is within their transmission radius @math . Modeling the physical reality that the speed of radio transmission is much faster than the motion of the agents, we assume that the rumor can travel throughout a connected component of @math before the graph is altered by the motion. We study the broadcast time @math of the system, which is the time it takes for all agents to know the rumor. We focus on the sparse case (below the percolation point @math ) where, with high probability, no connected component in @math has more than a logarithmic number of agents and the broadcast time is dominated by the time it takes for many independent random walks to meet each other. Quite surprisingly, we show that for a system below the percolation point the broadcast time does not depend on the relation between the mobility speed and the transmission radius. In fact, we prove that @math for any @math , even when the transmission range is significantly larger than the mobility range in one step, giving a tight characterization up to logarithmic factors. Our result complements a recent result of (SODA 2011) who showed that above the percolation point the broadcast time is polylogarithmic in @math .
|
A model similar to our scenario is often employed to represent the spreading of computer viruses in networks, and the spreading time is then also referred to as the infection time. Kesten and Sidoravicius @cite_9 characterized the rate at which an infection spreads among particles performing continuous-time random walks with the same jump rate. In @cite_6 , the authors provide a general bound on the average infection time when @math agents (one of them initially affected by the virus) move in an @math -node graph. For general graphs, this bound is @math , where @math denotes the maximum average meeting time of two random walks on the graph, and the maximum is taken over all pairs of starting locations of the random walks. Tighter bounds for the complete graph and for expanders are also provided in that paper. Observe that the @math bound specializes to @math for the @math -node grid by applying the known bound on @math of @cite_18 . A tight bound of @math on the infection time on the grid is claimed in @cite_0 , based on a rather informal argument where some unwarranted independence assumptions are made. Our results show that this latter bound is incorrect.
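The "red infects white" dynamics behind these bounds can be illustrated with a short sketch; the function infection_time, the lazy steps, and the cycle example are assumptions made for illustration and do not reproduce the exact setting of @cite_6 .

import random

def infection_time(adj, k, seed=1):
    """Toy 'red infects white' process: k lazy random walkers on a graph given
    as an adjacency list; a walker becomes infected as soon as it shares a node
    with an infected walker."""
    rng = random.Random(seed)
    nodes = list(adj)
    pos = [rng.choice(nodes) for _ in range(k)]
    red = [i == 0 for i in range(k)]
    t = 0
    while not all(red):
        # lazy steps (stay put with prob. 1/2) avoid parity traps on bipartite graphs
        pos = [v if rng.random() < 0.5 else rng.choice(adj[v]) for v in pos]
        by_node = {}
        for i, v in enumerate(pos):              # group walkers by node
            by_node.setdefault(v, []).append(i)
        for group in by_node.values():           # co-located walkers share the infection
            if any(red[i] for i in group):
                for i in group:
                    red[i] = True
        t += 1
    return t

# Example: a cycle on 60 nodes with 8 walkers (illustrative parameters only)
cycle = {v: [(v - 1) % 60, (v + 1) % 60] for v in range(60)}
print(infection_time(cycle, k=8))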
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_18",
"@cite_6"
],
"mid": [
"2130482496",
"2899702797",
"1964081855",
"1974210749"
],
"abstract": [
"We study the process of the spread of an infection among mobile nodes moving on a finite, grid based map. A random walk and a novel adversarial model are considered as two extreme cases of node mobility. With N nodes, we present analytical and simulation results for both mobility models for a square grid map with size √G × √G. A key finding is that with random mobility the total time to infect all nodes decreases with N while with an adversarial model we observe a reverse trend. Specifically, the random case results in a total infection time of Θ(GlogGlogN (N) as opposed to the adversarial case where the total infection time is found to be Θ(√(Glog(N). We also explore the possibility of emulating such an infection process as a mobile interaction game with wireless sensor motes, and the above results are complimented by traces obtained from an empirical study with humans as players in an outdoor field.",
"",
"In the models we will consider, space is represented by a grid of sites that can be in one of a finite number of states and that change at rates that depend on the states of a finite number of sites. Our main aim here is to explain an idea of Durrett and Levin (1994): the behavior of these models can be predicted from the properties of the mean field ODE, i.e., the equations for the densities of the various types that result from pretending that all sites are always independent. We will illustrate this picture through a discussion of eight families of examples from statistical mechanics, genetics, population biology, epidemiology, and ecology. Some of our findings are only conjectures based on simulation, but in a number of cases we are able to prove results for systems with \"fast stirring\" by exploiting connections between the spatial model and an associated reaction diffusion equation.",
"Consider k particles, 1 red and k - 1 white, chasing each other on the nodes of a graph G. If the red one catches one of the white, it \"infects\" it with its color. The newly red particles are now available to infect more white ones. When is it the case that all white will become red? It turns out that this simple question is an instance of information propagation between random walks and has important applications to mobile computing where a set of mobile hosts acts as an intermediary for the spread of information. In this paper we model this problem by k concurrent random walks, one corresponding to the red particle and k - 1 to the white ones. The infection time Tk of infecting all the white particles with red color is then a random variable that depends on k, the initial position of the particles, the number of nodes and edges of the graph, as well as on the structure of the graph. In this work we develop a set of probabilistic tools that we use to obtain upper bounds on the (worst case w.r.t. initial positions of particles) expected value of Tk for general graphs and important special cases. We easily get that an upper bound on the expected value of Tk is the worst case (over all initial positions) expected meeting time m* of two random walks multiplied by Θ(log k). We demonstrate that this is, indeed, a tight bound; i.e. there is a graph G (a special case of the \"lollipop\" graph), a range of values k < n (such that √n - k = Θ(√n)) and an initial position of particles achieving this bound. When G is a clique or has nice expansion properties, we prove much smaller bounds for Tk. We have evaluated and validated all our results by large scale experiments which we also present and discuss here. In particular, the experiments demonstrate that our analytical results for these expander graphs are tight."
]
}
|
1101.4609
|
2949265624
|
Motivated by the growing interest in mobile systems, we study the dynamics of information dissemination between agents moving independently on a plane. Formally, we consider @math mobile agents performing independent random walks on an @math -node grid. At time @math , each agent is located at a random node of the grid and one agent has a rumor. The spread of the rumor is governed by a dynamic communication graph process @math , where two agents are connected by an edge in @math iff their distance at time @math is within their transmission radius @math . Modeling the physical reality that the speed of radio transmission is much faster than the motion of the agents, we assume that the rumor can travel throughout a connected component of @math before the graph is altered by the motion. We study the broadcast time @math of the system, which is the time it takes for all agents to know the rumor. We focus on the sparse case (below the percolation point @math ) where, with high probability, no connected component in @math has more than a logarithmic number of agents and the broadcast time is dominated by the time it takes for many independent random walks to meet each other. Quite surprisingly, we show that for a system below the percolation point the broadcast time does not depend on the relation between the mobility speed and the transmission radius. In fact, we prove that @math for any @math , even when the transmission range is significantly larger than the mobility range in one step, giving a tight characterization up to logarithmic factors. Our result complements a recent result of (SODA 2011) who showed that above the percolation point the broadcast time is polylogarithmic in @math .
|
Recent work by @cite_22 studies a process in which agents follow independent Brownian motions in @math . They investigate several properties of the system, such as detection, coverage and percolation times, and characterize them as functions of the spatial density of the agents, which is assumed to be greater than the percolation point. Leveraging these results, they show that the broadcast time of a message is polylogarithmic in the number of agents, under the assumption that a message spreads within a connected component of the communication graph instantaneously, before the graph is altered by the agents' motion.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2158348301"
],
"abstract": [
"Static wireless networks are by now quite well understood mathematically through the random geometric graph model. By contrast, there are relatively few rigorous results on the practically important case of mobile networks. In this paper we consider a natural extension of the random geometric graph model to the mobile setting by allowing nodes to move in space according to Brownian motion. We study three fundamental questions in this model: detection (the time until a given target point---which may be either fixed or moving---is detected by the network), coverage (the time until all points inside a finite box are detected by the network), and percolation (the time until a given node is able to communicate with the giant component of the network). We derive precise asymptotics for these problems by combining ideas from stochastic geometry, coupling and multi-scale analysis. We also give an application of our results to analyze the time to broadcast a message in a mobile network."
]
}
|
1101.2819
|
2949417504
|
Differential privacy is a promising approach to privacy preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique based on a form of probabilistic bisimulation relation for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique.
|
The definition of differential privacy may be seen as largely a simplification of the previously defined notion of @math - @cite_18 , which explicitly models interaction between a private system and the data examiner as in our definition of differential noninterference. Our definition, however, is cast in the framework of probabilistic automata rather than Turing machines. This supports having structured models that are capable of highlighting issues arising from the bounded memory of actual computers. Furthermore, we deal with non-termination using prefixes allowing us to leverage previous work on formal methods for automata (e.g., @cite_40 ).
|
{
"cite_N": [
"@cite_40",
"@cite_18"
],
"mid": [
"2037507558",
"2951011752"
],
"abstract": [
"Probabilistic automata (PAs) constitute a general framework for modeling and analyzing discrete event systems that exhibit both nondeterministic and probabilistic behavior, such as distributed algorithms and network protocols. The behavior of PAs is commonly defined using schedulers (also called adversaries or strategies), which resolve all nondeterministic choices based on past history. From the resulting purely probabilistic structures, trace distributions can be extracted, whose intent is to capture the observable behavior of a PA. However, when PAs are composed via an (asynchronous) parallel composition operator, a global scheduler may establish strong correlations between the behavior of system components and, for example, resolve nondeterministic choices in one PA based on the outcome of probabilistic choices in the other. It is well known that, as a result of this, the (linear-time) trace distribution precongruence is not compositional for PAs. In his 1995 Ph.D. thesis, Segala has shown that the (branching-time) probabilistic simulation preorder is compositional for PAs. In this paper, we establish that the simulation preorder is, in fact, the coarsest refinement of the trace distribution preorder that is compositional. We prove our characterization result by providing (1) a context of a given PA @math , called the tester, which may announce the state of @math to the outside world, and (2) a specific global scheduler, called the observer, which ensures that the state information that is announced is actually correct. Now when another PA @math is composed with the tester, it may generate the same external behavior as the observer only when it is able to simulate @math in the sense that whenever @math goes to some state @math , @math can go to a corresponding state @math , from which it may generate the same external behavior. Our result shows that probabilistic contexts together with global schedulers are able to exhibit the branching structure of PAs.",
"We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator) wPINQ can reproduce (and improve) several recent results on graph analysis and introduce new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements."
]
}
|
1101.2819
|
2949417504
|
Differential privacy is a promising approach to privacy preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique based on a form of probabilistic bisimulation relation for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique.
|
Much work has been done on decision algorithms for probabilistic simulation and bisimulation @cite_7 @cite_5 @cite_29 @cite_11 . Particularly relevant are the works of Baier and Hermans @cite_7 , and Cattani and Segala @cite_11 on decision algorithms for weak bisimulations. Since our unwinding relations keep track of an error bound in the form of indices in a relation family, the methods used in these papers to generate relations do not readily apply to our setting. We limit ourselves to checking whether a given relation family is an unwinding family rather than generating one. Extending these prior works to our setting remains future work.
|
{
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_7",
"@cite_11"
],
"mid": [
"1963619740",
"1810681804",
"2006889545",
""
],
"abstract": [
"This paper deals with probabilistic and nondeterministic processes represented by a variant of labeled transition systems where any outgoing transition of a state s is augmented with probabilities for the possible successor states. Our main contributions are algorithms for computing this bisimulation equivalence classes as introduced by Larsen and Skou (1996, Inform. and Comput.99, 1?28), and the simulation preorder a la Segala and Lynch (1995, Nordic J. Comput.2, 250?273). The algorithm for deciding bisimilarity is based on a variant of the traditional partitioning technique and runs in time O(mn(logm+logn)) where m is the number of transitions and n the number of states. The main idea for computing the simulation preorder is the reduction to maximum flow problems in suitable networks. Using the method of Cheriyan, Hagerup, and Mehlhorn, (1990, Lecture Notes in Computer Science, Vol. 443, pp. 235?248, Springer-Verlag, Berlin) for computing the maximum flow, the algorithm runs in time O((mn6+m2n3) logn). Moreover, we show that the network-based technique is also applicable to compute the simulation-like relation of Jonsson and Larsen (1991, “Proc. LICS'91” pp. 266?277) in fully probabilistic systems (a variant of ordinary labeled transition systems where the nondeterminism is totally resolved by probabilistic choices).",
"In this paper, we introduce weak bisimulation in the framework of Labeled Concurrent Markov Chains, that is, probabilistic transition systems which exhibit both probabilistic and nondeterministic behavior. By resolving the nondeterminism present, these models can be decomposed into a possibly infinite number of computation trees. We show that in order to compute weak bisimulation it is sufficient to restrict attention to only a finite number of these computations. Finally, we present an algorithm for deciding weak bisimulation which has polynomial-time complexity in the number of states of the transition system.",
"This paper considers a weak simulation preorder for Markov chains that allows for stuttering. Despite the second-order quantification in its definition, we present a polynomial-time algorithm to compute the weak simulation preorder of a finite Markov chain.",
""
]
}
|
1101.3085
|
1960249932
|
Since the information available is fundamental for our perceptions and opinions, we are interested in understanding the conditions allowing for a good information to be disseminated. This paper explores opinion dynamics by means of multi-agent based simulations when agents get informed by different sources of information. The scenario implemented includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinion is updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. They are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information. We expect that peer-to--peer communication and reliable information sources are able both to reduce biased perceptions and to inhibit information cheating, possibly performed by the media as stated by the agenda-setting theory. In the paper, after having shortly presented both the hypotheses and the model, the simulation design will be specified and results will be discussed with respect to the hypotheses. Some considerations and ideas for future studies will conclude the paper.
|
The most popular model applied to the aggregation of opinions is the bounded confidence model, presented in @cite_14 .
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2083689991"
],
"abstract": [
"We present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. High thresholds yield convergence of opinions towards an average opinion, whereas low thresholds result in several opinion clusters: members of the same cluster share the same opinion but are no longer influenced by members of other clusters."
]
}
|
1101.3085
|
1960249932
|
Since the information available is fundamental for our perceptions and opinions, we are interested in understanding the conditions allowing for a good information to be disseminated. This paper explores opinion dynamics by means of multi-agent based simulations when agents get informed by different sources of information. The scenario implemented includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinion is updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. They are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information. We expect that peer-to--peer communication and reliable information sources are able both to reduce biased perceptions and to inhibit information cheating, possibly performed by the media as stated by the agenda-setting theory. In the paper, after having shortly presented both the hypotheses and the model, the simulation design will be specified and results will be discussed with respect to the hypotheses. Some considerations and ideas for future studies will conclude the paper.
|
Much like previous studies, in this paper agents exchanging information are modeled as adjusting their opinions only if the previously held and the received information are close enough to each other. This aspect is modeled by introducing a real number @math , which stands for tolerance or uncertainty ( @cite_16 ), such that an agent with opinion @math interacts only with agents whose opinion lies in the interval @math .
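A minimal sketch of such a bounded-confidence update (a Deffuant-style rule with random pairwise encounters; the names bounded_confidence, eps, mu and the chosen values are illustrative, not those of the cited model):

import random

def bounded_confidence(n=200, eps=0.2, mu=0.5, steps=20000, seed=0):
    """Minimal bounded-confidence sketch: two randomly chosen agents move their
    opinions towards each other only when these differ by less than eps."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]         # opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:
            shift = mu * (x[j] - x[i])
            x[i] += shift                        # symmetric compromise
            x[j] -= shift
    return x

opinions = bounded_confidence()
print(min(opinions), max(opinions))

With a small eps the final opinions typically split into several clusters, while a large eps tends to produce consensus, matching the behavior described in @cite_14 .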
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2058105398"
],
"abstract": [
"Statistical physics has proven to be a fruitful framework to describe phenomena outside the realm of traditional physics. Recent years have witnessed an attempt by physicists to study collective phenomena emerging from the interactions of individuals as elementary units in social structures. A wide list of topics are reviewed ranging from opinion and cultural and language dynamics to crowd behavior, hierarchy formation, human dynamics, and social spreading. The connections between these problems and other, more traditional, topics of statistical physics are highlighted. Comparison of model results with empirical data from social systems are also emphasized."
]
}
|
1101.3393
|
2951072142
|
For realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters @math and @math , this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal @math and @math are involved in the trade-off between a detour path for @math ; In the low-performance regime at a small @math , the wandering path for @math and @math is small, neither the wandering long path with short wait trapped at nodes ( @math ), nor the short hopping path with long wait trapped at hubs ( @math ) is advisable. A uniformly random walk ( @math ) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.
|
In typical models, a forwarding node @math is chosen with probability either @math @cite_11 @cite_3 @cite_21 or @math @cite_13 . Here, @math and @math are real parameters, while @math and @math denote, respectively, the degree and the queue length dynamically occupied by packets at a node @math among the connected neighbors @math of the node @math at which the packet currently resides. These methods are not based on a simple random walk (which selects a forwarding node uniformly at random among the neighbors), but on extensions of it (including the uniformly random walk at @math ), called preferential and congestion-aware walks, respectively. Note that @math leads to a short path passing through hubs, whereas @math and @math lead to the avoidance of hubs and of congested nodes with large @math . In these stochastic routing methods, instead of using the shortest path, the optimal values of @math and @math that maximize the packet generation rate in a free-flow regime have been obtained by numerical simulations @cite_3 @cite_21 . A correlation between congestion at the node level and a betweenness centrality measure was suggested in @cite_13 .
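The two selection rules can be sketched as follows; since the exact expressions are hidden behind the @math placeholders, the weights degree[j] ** alpha and (1 + queue[j]) ** (-beta) are only standard stand-ins, and pick_neighbor together with all parameter values is hypothetical.

import random

def pick_neighbor(neighbors, weight, rng):
    """Choose a forwarding node j with probability proportional to weight(j)."""
    w = [weight(j) for j in neighbors]
    r, acc = rng.random() * sum(w), 0.0
    for j, wj in zip(neighbors, w):
        acc += wj
        if r <= acc:
            return j
    return neighbors[-1]

rng = random.Random(0)
degree = {1: 2, 2: 5, 3: 9}      # k_j of the three neighbors
queue = {1: 0, 2: 3, 3: 10}      # q_j of the three neighbors
alpha, beta = 1.0, 1.0           # illustrative exponents

# preferential walk: bias towards hubs for alpha > 0, away from them for alpha < 0
j_pref = pick_neighbor([1, 2, 3], lambda j: degree[j] ** alpha, rng)
# congestion-aware walk: bias away from long queues (one common functional form)
j_cong = pick_neighbor([1, 2, 3], lambda j: (1.0 + queue[j]) ** (-beta), rng)
print(j_pref, j_cong)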
|
{
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_3",
"@cite_11"
],
"mid": [
"2028069703",
"2044643700",
"1999261651",
"1560056020"
],
"abstract": [
"We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabasi and Albert BA model1 and on the small-world phenomenon by Watts and Strogatz 2, the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community see the review papers 3‐5 and references therein. The increasing importance of large communication networks such as the Internet 6, upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure 7‐11. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems 12. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid",
"We present a study of transport on complex networks with routing based on local information. Particles hop from one node of the network to another according to a set of routing rules with different degrees of congestion awareness, ranging from random diffusion to rigid congestion-gradient driven flow. Each node can be either source or destination for particles and all nodes have the same routing capacity, which are features of ad-hoc wireless networks. It is shown that the transport capacity increases when a small amount of congestion awareness is present in the routing rules, and that it then decreases as the routing rules become too rigid when the flow becomes strictly congestion-gradient driven. Therefore, an optimum value of the congestion awareness exists in the routing rules. It is also shown that, in the limit of a large number of nodes, networks using routing based on local information jam at any nonzero load. Finally, we study the correlation between congestion at node level and a betweenness centrality measure.",
"",
"It is just amazing that both of the mean hitting time and the cover time of a random walk on a finite graph, in which the vertex visited next is selected from the adjacent vertices at random with the same probability, are bounded by O(n3) for any undirected graph with order n, despite of the lack of global topological information. Thus a natural guess is that a better transition matrix is designable if more topological information is available. For any undirected connected graph G = (V,E), let P(β) = (puvβ)u,v∈V be a transition matrix defined by puvβ = exp [-βU(u, v)] Σw∈N(u) exp [-βU(u, w)] for u∈V, v∈N(u), where β is a real number, N(u) is the set of vertices adjacent to a vertex u, deg(u) = |N(u)|, and U(., .) is a potential function defined as U(u, v) = log (max deg(u), deg(v) ) for u∈V, v∈N(u). In this paper, we show that for any undirected graph with order n, the cover time and the mean hitting time with respect to P(1) are bounded by O(n2 log n) and O(n2), respectively. We further show that P(1) is best possible with respect to the mean hitting time, in the sense that the mean hitting time of a path graph of order n, with respect to any transition matrix, is Ω(n2)."
]
}
|
1101.3393
|
2951072142
|
For realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters @math and @math , this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal @math and @math are involved in the trade-off between a detour path for @math ; In the low-performance regime at a small @math , the wandering path for @math and @math is small, neither the wandering long path with short wait trapped at nodes ( @math ), nor the short hopping path with long wait trapped at hubs ( @math ) is advisable. A uniformly random walk ( @math ) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.
|
Other routing schemes @cite_20 @cite_18 @cite_17 have also been considered, which take into account the lengths of both the routing path and the queue. In a deterministic model @cite_18 , a forwarding node @math is chosen among the neighbors @math by minimizing the quantity @math with a weight @math , where @math denotes the distance from @math to the terminal node. Since optimization problems must be solved, these models @cite_21 @cite_18 are not suitable for wireless or ad hoc communication networks; stochastic routing methods using only local information are therefore potentially promising. In a stochastic model @cite_17 , @math is chosen at random, and the packet at the top of its queue is sent with probability @math or refused with probability @math , the latter being a nondecreasing function of the queue length @math . The model is simplified by assuming a constant packet arrival rate, which allows the critical point of traffic congestion to be analyzed with a mean-field equation @cite_17 .
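A hedged sketch of both rules follows; the combination h * dist[j] + (1 - h) * queue[j] and the acceptance function 1 / (1 + q / q0) are illustrative stand-ins for the expressions hidden behind the @math placeholders, and all names are hypothetical.

def next_hop_deterministic(neighbors, dist, queue, h=0.5):
    """Pick the neighbor minimising a convex combination of its distance to the
    destination and its queue length; h * dist[j] + (1 - h) * queue[j] is one
    common choice standing in for the hidden expression of @cite_18."""
    return min(neighbors, key=lambda j: h * dist[j] + (1.0 - h) * queue[j])

def accept_probability(q, q0=10.0):
    """Illustrative acceptance rule for the stochastic model of @cite_17: a packet
    is accepted with a probability that decreases as the queue length q grows,
    so the refusal probability is nondecreasing in q, as stated above."""
    return 1.0 / (1.0 + q / q0)

print(next_hop_deterministic([1, 2, 3], {1: 4, 2: 2, 3: 6}, {1: 0, 2: 9, 3: 1}))
print(accept_probability(5.0))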
|
{
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_20",
"@cite_17"
],
"mid": [
"1994134387",
"2028069703",
"2151081147",
"2033017415"
],
"abstract": [
"We numerically investigate jamming transitions in complex heterogeneous networks. Inspired by Internet routing protocols, we study a general model that incorporates local traffic information through a tunable parameter. The results show that whether the transition from a low-traffic regime to a congested phase is of first- or second-order type is determined by the protocol at work. The microscopic dynamics reveals that these two radically different behaviors are due to the way in which traffic jams propagate through the network. Our results are discussed in the context of Internet dynamics and other transport processes that take place on complex networks and provide insights for the design of routing policies based on traffic awareness in communication systems.",
"We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabasi and Albert BA model1 and on the small-world phenomenon by Watts and Strogatz 2, the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community see the review papers 3‐5 and references therein. The increasing importance of large communication networks such as the Internet 6, upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure 7‐11. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems 12. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid",
"networks. The strategy is governed by a single parameter. Simulation results show that maximizing the network capacity and reducing the packet travel time can generate an optimal parameter value. Compared with the strategy of adopting exclusive local static information, the new strategy shows its advantages in improving the efficiency of the system. The detailed analysis of the mixing strategy is provided for explaining its effects on traffic routing. The work indicates that effectively utilizing the larger degree nodes plays a key role in scalefree traffic systems.",
"We define a minimal model of traffic flows in complex networks in order to study the trade-off between topological-based and traffic-based routing strategies. The resulting collective behavior is obtained analytically for an ensemble of uncorrelated networks and summarized in a rich phase diagram presenting second-order as well as first-order phase transitions between a free-flow phase and a congested phase. We find that traffic control improves global performance, enlarging the free-flow region in parameter space only in heterogeneous networks. Traffic control introduces nonlinear effects and, beyond a critical strength, may trigger the appearance of a congested phase in a discontinuous manner. The model also reproduces the crossover in the scaling of traffic fluctuations empirically observed on the Internet."
]
}
|
1101.3393
|
2951072142
|
For realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters @math and @math , this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal @math and @math are involved in the trade-off between a detour path for @math ; In the low-performance regime at a small @math , the wandering path for @math and @math is small, neither the wandering long path with short wait trapped at nodes ( @math ), nor the short hopping path with long wait trapped at hubs ( @math ) is advisable. A uniformly random walk ( @math ) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.
|
In a model with a different processing power at each node @cite_3 , it has also been assumed that the node capacity @math is proportional to its degree @math , so that more packets jump out from a node as its degree becomes larger. On the other hand, in the ZRP @cite_9 @cite_8 @cite_5 , the forwarding capacity at a node depends on the number @math of packets, defined as the queue length occupied by packets at node @math . The ZRP is a solvable theoretical model for traffic dynamics. In particular, for the ZRP with a random walk at @math , the phase transition between condensation of packets at hubs and the absence of condensation on SF networks has been derived @cite_9 @cite_8 . For @math , a similar phase transition has been analyzed in the mean-field approximation @cite_4 .
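The following toy sweep illustrates one step of a zero-range process on a network; the jump rate u(m) = m ** delta and the normalisation u_cap are assumptions standing in for the hidden @math rate function, and zrp_step together with the small example graph is hypothetical.

import random

def zrp_step(occupancy, adj, delta=0.5, u_cap=4.0, seed=0):
    """One synchronous sweep of a toy zero-range process on a network: a node
    holding m > 0 packets forwards a single packet with probability
    min(1, u(m) / u_cap), where u(m) = m ** delta, and the packet hops to a
    uniformly random neighbor (the random-walk case discussed above)."""
    rng = random.Random(seed)
    moves = []
    for v, m in occupancy.items():
        if m > 0 and rng.random() < min(1.0, (m ** delta) / u_cap):
            moves.append((v, rng.choice(adj[v])))
    for src, dst in moves:          # apply all departures and arrivals together
        occupancy[src] -= 1
        occupancy[dst] += 1
    return occupancy

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(zrp_step({0: 5, 1: 0, 2: 3, 3: 1}, adj))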
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_5"
],
"mid": [
"1981396137",
"1983505675",
"2002565409",
"1999261651",
"1599687342"
],
"abstract": [
"We study the condensation phenomenon in a zero range process on weighted scale-free networks in order to show how the weighted transport influences the particle condensation. Instead of the approach of grand canonical ensemble which is generally used in a zero range process, we introduce an alternate approach of the mean-field equations to study the dynamics of particle transport. We find that the condensation on the scale-free network is easier to occur in the case of weighted transport than in the case of weight-free networks. In the weighted transport, especially, a dynamical condensation is even possible for the case of no interaction among particles, which is impossible in the case of weight-free networks.",
"We study the condensation phenomenon in a zero-range process on scale-free networks. We show that the stationary state property depends only on the degree distribution of underlying networks. The model displays a stationary state phase transition between a condensed phase and an uncondensed phase, and the phase diagram is obtained analytically. As for the dynamical property, we find that the relaxation dynamics depends on the global structure of underlying networks. The relaxation time follows the power law @math with the network size @math in the condensed phase. The dynamic exponent @math is found to take a different value depending on whether underlying networks have a tree structure or not.",
"We study a zero range process on scale-free networks in order to investigate how network structure influences particle dynamics. The zero range process is defined with the particle jumping rate function @math . We show analytically that a complete condensation occurs when @math where @math is the degree distribution exponent of the underlying networks. In the complete condensation, those nodes whose degree is higher than a threshold are occupied by macroscopic numbers of particles, while the other nodes are occupied by negligible numbers of particles. We also show numerically that the relaxation time follows a power-law scaling @math with the network size @math and a dynamic exponent @math in the condensed phase.",
"",
"motion of a random walker on heterogeneous complex networks. We find that the random walker is attracted toward nodes with larger degree and that the random walk motion is asymmetric. The asymmetry can be quantified with the random walk centrality, the matrix formulation of which is presented. As an interacting system, we consider the zero-range process on complex networks. We find that a structural heterogeneity can lead to a condensation phenomenon. These studies show that structural heterogeneity plays an important role in understanding the properties of dynamical systems."
]
}
|
1101.3393
|
2951072142
|
For realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters @math and @math , this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal @math and @math are involved in the trade-off between a detour path for @math ; In the low-performance regime at a small @math , the wandering path for @math and @math is small, neither the wandering long path with short wait trapped at nodes ( @math ), nor the short hopping path with long wait trapped at hubs ( @math ) is advisable. A uniformly random walk ( @math ) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.
|
In the next two sections, based on a straightforward approach introduced in Refs. @cite_9 @cite_8 , we derive the phase transition in the ZRP on SF networks with the degree-dependent hopping rule for both @math and @math , inspired by preferential @cite_11 @cite_3 @cite_21 and congestion-aware walks. Although the rule is not identical to the congestion-aware routing scheme @cite_13 based on occupied queue length @math , @math corresponds to avoiding hubs with large degrees, where many packets tend to be concentrated. Furthermore, we study the traffic properties in the case with a neighbor search for the terminal node at the last step.
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_13",
"@cite_11"
],
"mid": [
"1983505675",
"2002565409",
"2028069703",
"1999261651",
"2044643700",
"1560056020"
],
"abstract": [
"We study the condensation phenomenon in a zero-range process on scale-free networks. We show that the stationary state property depends only on the degree distribution of underlying networks. The model displays a stationary state phase transition between a condensed phase and an uncondensed phase, and the phase diagram is obtained analytically. As for the dynamical property, we find that the relaxation dynamics depends on the global structure of underlying networks. The relaxation time follows the power law @math with the network size @math in the condensed phase. The dynamic exponent @math is found to take a different value depending on whether underlying networks have a tree structure or not.",
"We study a zero range process on scale-free networks in order to investigate how network structure influences particle dynamics. The zero range process is defined with the particle jumping rate function @math . We show analytically that a complete condensation occurs when @math where @math is the degree distribution exponent of the underlying networks. In the complete condensation, those nodes whose degree is higher than a threshold are occupied by macroscopic numbers of particles, while the other nodes are occupied by negligible numbers of particles. We also show numerically that the relaxation time follows a power-law scaling @math with the network size @math and a dynamic exponent @math in the condensed phase.",
"We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabasi and Albert BA model1 and on the small-world phenomenon by Watts and Strogatz 2, the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community see the review papers 3‐5 and references therein. The increasing importance of large communication networks such as the Internet 6, upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure 7‐11. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems 12. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid",
"",
"We present a study of transport on complex networks with routing based on local information. Particles hop from one node of the network to another according to a set of routing rules with different degrees of congestion awareness, ranging from random diffusion to rigid congestion-gradient driven flow. Each node can be either source or destination for particles and all nodes have the same routing capacity, which are features of ad-hoc wireless networks. It is shown that the transport capacity increases when a small amount of congestion awareness is present in the routing rules, and that it then decreases as the routing rules become too rigid when the flow becomes strictly congestion-gradient driven. Therefore, an optimum value of the congestion awareness exists in the routing rules. It is also shown that, in the limit of a large number of nodes, networks using routing based on local information jam at any nonzero load. Finally, we study the correlation between congestion at node level and a betweenness centrality measure.",
"It is just amazing that both of the mean hitting time and the cover time of a random walk on a finite graph, in which the vertex visited next is selected from the adjacent vertices at random with the same probability, are bounded by O(n3) for any undirected graph with order n, despite of the lack of global topological information. Thus a natural guess is that a better transition matrix is designable if more topological information is available. For any undirected connected graph G = (V,E), let P(β) = (puvβ)u,v∈V be a transition matrix defined by puvβ = exp [-βU(u, v)] Σw∈N(u) exp [-βU(u, w)] for u∈V, v∈N(u), where β is a real number, N(u) is the set of vertices adjacent to a vertex u, deg(u) = |N(u)|, and U(., .) is a potential function defined as U(u, v) = log (max deg(u), deg(v) ) for u∈V, v∈N(u). In this paper, we show that for any undirected graph with order n, the cover time and the mean hitting time with respect to P(1) are bounded by O(n2 log n) and O(n2), respectively. We further show that P(1) is best possible with respect to the mean hitting time, in the sense that the mean hitting time of a path graph of order n, with respect to any transition matrix, is Ω(n2)."
]
}
|
1101.2713
|
2026035956
|
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
|
To the best of our knowledge, our framework for studying the compressive matched filter is novel. Prior statistical analysis for compressive inference problems has focused specifically on problems of signal detection or classification from a finite model set @cite_6 @cite_20 @cite_0 , or has employed a geometric point of view based on a stable embedding of a signal family from an original finite-dimensional signal space into a lower-dimensional measurement space @cite_11 @cite_3 . Our work takes a substantially different approach, considering the inference of a continuous-valued shift parameter from a continuous-time received signal, and more thoroughly characterizing the statistics of the problem using the language and tools of empirical processes.
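A discrete toy version of the compressive matched filter described in the abstract is sketched below (circular shifts in place of continuous delays; all variable names and parameter values are illustrative): M random frequency-domain samples of the received signal are correlated against modulated samples of the template spectrum, and the peak of the statistic gives the delay and amplitude estimates.

import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 64                         # signal length, number of frequency samples
s = rng.standard_normal(N)              # template (a hypothetical random pulse)
true_delay, amp = 317, 0.8
r = amp * np.roll(s, true_delay) + 0.1 * rng.standard_normal(N)   # received signal

S, R = np.fft.fft(s), np.fft.fft(r)
omega = rng.choice(N, size=M, replace=False)        # random frequency samples

# Correlate the sampled spectrum against modulated samples of the template
# spectrum, one candidate delay at a time, and locate the peak.
delays = np.arange(N)
phases = np.exp(2j * np.pi * np.outer(omega, delays) / N)          # M x N
corr = (np.conj(S[omega])[:, None] * R[omega][:, None] * phases).sum(axis=0)
est_delay = int(delays[np.argmax(np.abs(corr))])
est_amp = corr[est_delay].real / np.sum(np.abs(S[omega]) ** 2)
print(est_delay, round(float(est_amp), 3))

With the high signal-to-noise ratio used here the peak should typically land on the planted delay; shrinking M or increasing the noise level degrades the estimate, mirroring the measurement/SNR trade-off discussed in the abstract.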
|
{
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_20",
"@cite_11"
],
"mid": [
"2120961178",
"2537887767",
"2130998683",
"2160172035",
"2126131432"
],
"abstract": [
"The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems-such as detection, classification, or estimation-and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.",
"Compressive sampling (CS), also called compressed sensing, entails making observations of an unknown signal by projecting it onto random vectors. Recent theoretical results show that if the signal is sparse (or nearly sparse) in some basis, then with high probability such observations essentially encode the salient information in the signal. Further, the signal can be reconstructed from these \"random projections,\" even when the number of observations is far less than the ambient signal dimension. The provable success of CS for signal reconstruction motivates the study of its potential in other applications. This paper investigates the utility of CS projection observations for signal classification (more specifically, m-ary hypothesis testing). Theoretical error bounds are derived and verified with several simulations.",
"Compressive sampling (CS) refers to a generalized sampling paradigm in which observations are inner products between an unknown signal vector and user-specified test vectors. Among the attractive features of CS is the ability to reconstruct any sparse (or nearly sparse) signal from a relatively small number of samples, even when the observations are corrupted by additive noise. However, the potential of CS in other signal processing applications is still not fully known. This paper examines the performance of CS for the problem of signal detection. A generalized restricted isometry property (GRIP) is introduced, which guarantees that angles are preserved, in addition to the usual norm preservation, by CS. The GRIP is leveraged to derive error bounds for a CS matched filtering scheme, and to show that the scheme is robust to signal mismatch.",
"The recently introduced theory of compressed sensing (CS) enables the reconstruction or approximation of sparse or compressible signals from a small set of incoherent projections; often the number of projections can be much smaller than the number of Nyquist rate samples. In this paper, we show that the CS framework is information scalable to a wide range of statistical inference tasks. In particular, we demonstrate how CS principles can solve signal detection problems given incoherent measurements without ever reconstructing the signals involved. We specifically study the case of signal detection in strong inference and noise and propose an incoherent detection and estimation algorithm (IDEA) based on matching pursuit. The number of measurements and computations necessary for successful detection using IDEA is significantly lower than that necessary for successful reconstruction. Simulations show that IDEA is very resilient to strong interference, additive noise, and measurement quantization. When combined with random measurements, IDEA is applicable to a wide range of different signal classes",
"The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible image or signal from a small set of linear, non-adaptive (even random) projections. However, in many applications, including object and target recognition, we are ultimately interested in making a decision about an image rather than computing a reconstruction. We propose here a framework for compressive classification that operates directly on the compressive measurements without first reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the images but only on the noise level. The second part of the theory applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. We exploit the fact the set of transformed images forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels samples and image classes. Using both simulations and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness of the smashed filter for target classification using very few measurements."
]
}
|
1101.2713
|
2026035956
|
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
|
As mentioned above, similar probabilistic tools have been employed in CS, but for the analysis of the sparse signal recovery problem @cite_30 @cite_26 @cite_4 @cite_2 @cite_9 . While in principle one could view the matched filter problem as that of recovering a @math -sparse signal from a dictionary @math of possible candidates, such a dictionary would have infinite size and extremely high coherence, preventing the application of most standard recovery analysis techniques. One recent work @cite_1 has formalized the matched filter problem using signal recovery principles and a union of subspaces model. However, this work is quite different from ours in that it does not theoretically study noise sensitivity and relies on a non-random sampling architecture that is carefully designed to facilitate the solution of the recovery problem. Interestingly, outside the field of CS, random processes very similar to those that we study have also arisen in the analysis of the spectral norm of random Toeplitz matrices @cite_28 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_2"
],
"mid": [
"2055064119",
"2130345277",
"2114899213",
"2131680714",
"2963302510",
"2102701524",
"2048313089"
],
"abstract": [
"This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The method for reconstruction that has recently gained momentum in the sparse approximation theory is to relax this highly nonconvex problem to a convex problem and then solve it as a linear program. We show that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation, and Ω has size k(r, n) = O(r log(n)·log 2 (r) log(r logn)) = O(r log 4 n ). A random set Ω satisfies this with high probability. This estimate is optimal within the log log n and log 3 r factors. We also give a relatively short argument for a similar problem with k(r, n) ≈ r[12 + 8 log(n r)] Gaussian measurements. We use methods of geometric functional analysis and probability theory in Banach spaces, which makes our arguments quite short.",
"Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by 1-minization. In contrast to recent work in this direction we allow the use of an arbitrary subset of rows of a circulant and Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by 1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.",
"This paper considers the problem of estimating a discrete signal from its convolution with a pulse consisting of a sequence of independent and identically distributed Gaussian random variables. We derive lower bounds on the length of a random pulse needed to stably reconstruct a signal supported on [1, n]. We will show that a general signal can be stably recovered from convolution with a pulse of length m ≳ n log5 n, and a sparse signal which can be closely approximated using s ≲ n log5 n terms can be stably recovered with a pulse of length n.",
"Suppose that @math is a Toeplitz matrix whose entries come from a sequence of independent but not necessarily identically distributed random variables with mean zero. Under some additional tail conditions, we show that the spectral norm of @math is of the order @math . The same result holds for random Hankel matrices as well as other variants of random Toeplitz matrices which have been studied in the literature.",
"In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies m ≳ (s logn)^(3 2), where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.",
"Time-delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Previous methods for time delay recovery either operate on the analog received signal, or require sampling at the Nyquist rate of the transmitted pulse. In this paper, we develop a unified approach to time delay estimation from low-rate samples. This problem can be formulated in the broader context of sampling over an infinite union of subspaces. Although sampling over unions of subspaces has been receiving growing interest, previous results either focus on unions of finite-dimensional subspaces, or finite unions. The framework we develop here leads to perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses, and allows for a variety of different sampling methods. The sampling rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. This result can be viewed as a sampling theorem over an infinite union of infinite dimensional subspaces. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation, we develop sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.",
"This paper considers the problem of estimating the channel response (or the Green's function) between multiple source–receiver pairs. Typically, the channel responses are estimated one-at-a-time: a single source sends out a known probe signal, the receiver measures the probe signal convolved with the channel response and the responses are recovered using deconvolution. In this paper, we show that if the channel responses are sparse and the probe signals are random, then we can significantly reduce the total amount of time required to probe the channels by activating all of the sources simultaneously. With all sources activated simultaneously, the receiver measures a superposition of all the channel responses convolved with the respective probe signals. Separating this cumulative response into individual channel responses can be posed as a linear inverse problem. We show that channel response separation is possible (and stable) even when the probing signals are relatively short in spite of the corresponding linear system of equations becoming severely underdetermined. We derive a theoretical lower bound on the length of the source signals that guarantees that this separation is possible with high probability. The bound is derived by putting the problem in the context of finding a sparse solution to an underdetermined system of equations, and then using mathematical tools from the theory of compressive sensing. Finally, we discuss some practical applications of these results, which include forward modeling for seismic imaging, channel equalization in multiple-input multiple-output communication and increasing the field-of-view in an imaging system by using coded apertures."
]
}
|
1101.2713
|
2026035956
|
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
|
The second part of this paper adapts our analysis of the compressive matched filter to the problem of estimating the frequency of a pure sinusoidal tone from a small number of random time-domain samples. The recovery of signals that are sparse in the frequency domain based on compressive measurements is a problem that has been well-studied in the CS literature, although most work in this area has been concerned with signals that can be written as trigonometric polynomials @cite_22 @cite_19 @cite_24 @cite_8 . Some techniques for recovering off-grid frequency-sparse signals have been proposed that involve windowing @cite_8 or other classical techniques from the field of spectral estimation @cite_7 , and other work has considered the more general problem of recovering continuous-time signals based on a union of subspaces model @cite_5 , but the analysis that we present is more sharply focused on the statistics of the simpler pure tone estimation problem.
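For the pure-tone problem, the analogous correlation statistic is easy to sketch. The following toy example (all parameters are assumptions made for illustration, not values from the paper) samples a complex sinusoid at random time instants and scans a fine frequency grid for the peak of the correlation magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed for this sketch).
W = 512.0        # nominal bandwidth: candidate frequencies in [0, W)
M = 60           # number of random time-domain samples
f0 = 137.25      # true (off-grid) tone frequency
sigma = 0.2      # noise level

# Random sample times in a unit observation window, and noisy samples of the tone.
t = rng.uniform(0.0, 1.0, size=M)
y = (np.exp(2j * np.pi * f0 * t)
     + sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2))

# Correlate against candidate tones on a fine grid and pick the peak.
f_grid = np.linspace(0.0, W, 20000, endpoint=False)
stat = np.abs(np.exp(-2j * np.pi * np.outer(f_grid, t)) @ y) / M
f_hat = f_grid[np.argmax(stat)]
print(f"true frequency = {f0:.2f}, estimate = {f_hat:.2f}")
```

Because the true frequency does not sit on any DFT grid, this is exactly the off-grid situation discussed above; the correlation statistic still peaks near f0, and the analysis in the paper quantifies how many random samples are needed for that to happen with high probability.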
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_24",
"@cite_19",
"@cite_5"
],
"mid": [
"2145096794",
"2133285942",
"2141116650",
"2146509513",
"2067161429",
"2096504426"
],
"abstract": [
"This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.",
"Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals based on randomized dimensionality reduction. To recover a signal from its compressive measurements, standard CS algorithms seek the sparsest signal in some discrete basis or frame that agrees with the measurements. A great many applications feature smooth or modulated signals that are frequency-sparse and can be modeled as a superposition of a small number of sinusoids; for such signals, the discrete Fourier transform (DFT) basis is a natural choice for CS recovery. Unfortunately, such signals are only sparse in the DFT domain when the sinusoid frequencies live precisely at the centers of the DFT bins; when this is not the case, CS recovery performance degrades signicantly. In this paper, we introduce the spectral CS (SCS) recovery framework for arbitrary frequencysparse signals. The key ingredients are an over-sampled DFT frame and a restricted unionof-subspaces signal model that inhibits closely spaced sinusoids. We demonstrate that SCS signicantly outperforms current state-of-the-art CS algorithms based on the DFT while providing provable bounds on the number of measurements required for stable recovery. We also leverage line spectral estimation methods (specically Thomson’s multitaper method",
"Wideband analog signals push contemporary analog-to-digital conversion (ADC) systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the band limit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its band limit in hertz. Simulations suggest that the random demodulator requires just O(K log(W K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W hertz. In contrast to Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.",
"This article describes a computational method, called the Fourier sampling algorithm. The algorithm takes a small number of (correlated) random samples from a signal and processes them efficiently to produce an approximation of the DFT of the signal. The algorithm offers provable guarantees on the number of samples, the running time, and the amount of storage. As we will see, these requirements are exponentially better than the FFT for some cases of interest.",
"Abstract We study the problem of reconstructing a multivariate trigonometric polynomial having only few non-zero coefficients from few random samples. Inspired by recent work of Candes, Romberg and Tao we propose to recover the polynomial by Basis Pursuit, i.e., by l 1 -minimization. In contrast to their work, where the sampling points are restricted to a grid, we model the random sampling points by a continuous uniform distribution on the cube, i.e., we allow them to have arbitrary position. Numerical experiments show that with high probability the trigonometric polynomial can be recovered exactly provided the number N of samples is high enough compared to the “sparsity”—the number of non-vanishing coefficients. However, N can be chosen small compared to the assumed maximal degree of the trigonometric polynomial. We present two theorems that explain this observation. One of them provides the analogue of the result of Candes, Romberg and Tao. The other one is a result toward an average case analysis and, unexpectedly connects to an interesting combinatorial problem concerning set partitions, which seemingly has not yet been considered before. Although our proofs follow ideas of they are simpler.",
"A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T . We model sparsity by treating the case in which only k out of the m generators are active, however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows to extend much of the recent literature on CS to the analog domain."
]
}
|
1101.2713
|
2026035956
|
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
|
Finally, we would like to point out some of the differences between the tone estimation problem considered in this paper and the classical problem of estimating the power spectrum of a random process from samples at random locations (see @cite_10 @cite_27 @cite_12 @cite_29 ). In Sections and , we will show how the output of the compressive matched filter is a random process whose mean is the template autocorrelation function. This random process is completely specified by the samples we have observed, and rather than merely estimating its second-order statistics, we will be interested in establishing a uniform bound on its deviation from the template; this will allow us to conclude that it peaks at or near the correct location. It is also worth mentioning that our work differs from Rife and Boorstyn's classical analysis of the single-tone parameter estimation problem @cite_25 . Specifically, our work permits sampling below the Nyquist rate, and with high probability we provide an absolute bound on the accuracy of the frequency estimate, rather than appealing to the Cramér-Rao bound.
|
{
"cite_N": [
"@cite_29",
"@cite_27",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2537614271",
"1992721284",
"2142739351",
"2001188690",
"2000271552"
],
"abstract": [
"We say that a signal is randomly sampled when the samples are taken at random instants of time. The study of random sampling and randomly sampled signals is motivated both by practical and theoretical interests. The first one includes spectral analysis (estimation of spectra from a finite number of samples) and quality of service (signal reconstruction), and the second one includes statistical analysis of reconstruction methods. The present paper focuses on the computation of the (theoretical) spectrum of randomly sampled signals and on the computation of the reconstruction error. Using a point process approach, we obtain general formulas for spatial random sampling, providing powerful tools for the analysis and the processing of randomly sampled signals.",
"A class of spectral estimates of continuous-time stationary stochastic processes X(t) from a finite number of observations X(t_ n ) ^ N _ n =l taken at Poisson sampling instants t_ n is considered. The asymptotic bias and covariance of the estimates are derived, and the influence of the spectral windows and the sampling rate on the performance of the estimates is discussed. The estimates are shown to be consistent under mild smoothness conditions on the spectral density. Comparison is made with a related class of spectral estimates suggested in [15] where the number of observations is random . It is shown that the periodograms of the two classes have distinct statistics.",
"The notion of alias-free sampling is generalized to apply to random processes x(t) sampled at random times t_n ; sampling is said to be alias free relative to a family of spectra if any spectrum of the family can be recovered by a linear operation on the correlation sequence r(n) , where r(n) = E[x(l_ m+n ) x(t_m) ] . The actual sampling times t_n need not be known to effect recovery of the spectrum of x(t) . Various alternative criteria for verifying alias-free sampling are developed. It is then shown that any spectrum whatsoever can be recovered if t_n is a Poisson point process on the positive (or negative) half-axis. A second example of alias-free sampling is provided for spectra on a finite interval by periodic sampling (for t t_o or t t_o ) in which samples are randomly independently skipped (expunged), such that the average sampling rate is an arbitrarily small fraction of the Nyquist rate. A third example shows that randomly jittered sampling at the Nyquist rate is alias free. Certain related open questions are discussed. These concern the practical problems involved in estimating a spectrum from imperfectly known r(n) .",
"Estimation of the parameters of a single-frequency complex tone from a finite number of noisy discrete-time observations is discussed. The appropriate Cramer-Rao bounds and maximum-likelihood (MI.) estimation algorithms are derived. Some properties of the ML estimators are proved. The relationship of ML estimation to the discrete Fourier transform is exploited to obtain practical algorithms. The threshold effect of one algorithm is analyzed and compared to simulation results. Other simulation results verify other aspects of the analysis.",
"Abstract Let X = X(t), − ∞ φ X (λ) of φX(λ) based on the discrete-time observation X(τk), τk are considered. Asymptotic expressions for the bias and covariance of φ X (λ) are derived. A multivariate central limit theorem is established for the spectral estimators φ X (λ) . Under mild conditions, it is shown that the bias is independent of the statistics of the sampling point process τk and that there exist sampling point processes such that the asymptotic variance is uniformly smaller than that of a Poisson sampling scheme for all spectral densities φX(λ) and all frequencies λ."
]
}
|
1101.3380
|
1513522290
|
We study the scenario where the players of a classical complete information game initially share an entangled pure quantum state. Each player may perform arbitrary local operations on his own qubits, but no direct communication is allowed. In this framework, we define the concept of quantum correlated equilibrium (QCE) for both normal and extensive form games of complete information. We show that in a normal form game, any outcome distribution implementable by a QCE can also be implemented by a classical correlated equilibrium (CE). We prove that the converse is surprisingly false: we give an example of an outcome distribution of a normal form game which is implementable by a CE, yet we prove that in any attempted quantum protocol beginning with a partition of a pure quantum state, at least one of the players will have an incentive to deviate. We extend our analysis to extensive form games, and find that the relation between classical and quantum correlated equilibria becomes less clear. We compare the outcome distributions implementable in our quantum model to those implementable by a classical extensive form correlated equilibrium (EFCE). For example, we show that there exists an extensive form complete information game and a distribution of outcomes which can be implemented by a QCE but not by any EFCE, in contrast to the result for normal form games. We also consider the concept of an immediate-revelation extensive form correlated equilibrium (IR-EFCE) and compare the power of IR-EFCE to EFCE and to QCE.
|
While work by @cite_15 , @cite_7 and La Mura @cite_11 has studied how quantum entanglement can aid in games of incomplete information (such as Bayesian games), we restrict our attention to games of complete information, and find that even in this framework the questions are nontrivial. Quantum solutions of classical coordination games have been studied previously, such as in @cite_7 and @cite_19 . In this paper, we look at games which have both cooperative and competitive components. Instead of analyzing the “quantization” of games (see Meyer @cite_13 ), our underlying games remain purely classical. Entanglement is used only as a device to aid in a player's decision of which strategy to play in the classical game. By keeping the underlying game classical, our model generalizes naturally from normal form to extensive form games.
|
{
"cite_N": [
"@cite_7",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2116397277",
"2127592065",
"2028815089",
"1480296433",
"1551419266"
],
"abstract": [
"This paper investigates various aspects of the nonlocal effects that can arise when entangled quantum information is shared between two parties. A natural framework for studying nonlocality is that of cooperative games with incomplete information, where two cooperating players may share entanglement. Here, nonlocality can be quantified in terms of the values of such games. We review some examples of non-locality and show that it can profoundly affect the soundness of two-prover interactive proof systems. We then establish limits on nonlocal behavior by upper-bounding the values of several of these games. These upper bounds can be regarded as generalizations of the so-called Tsirelson inequality. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies.",
"We present a quantum solution to coordination problems that can be implemented with existing technologies. Using the properties of entangled states, this quantum mechanism allows participants to rapidly find suitable correlated choices as an alternative to conventional approaches relying on explicit communication, prior commitment or trusted third parties. Unlike prior proposals for quantum games our approach retains the same choices as in the classical game and instead utilizes quantum entanglement as an extra resource to aid the participants in their choices. PACS: 03.67.-a; 02.50.Le; 89.65.Gh",
"",
"A quantum algorithm for an oracle problem can be understood as a quantum strategy for a player in a two-player zero-sum game in which the other player is constrained to play classically. I formalize this correspondence and give examples of games (and hence oracle problems) for which the quantum player can do better than would be possible classically. The most remarkable example is the Bernstein-Vazirani quantum search algorithm which I show creates no entanglement at any timestep.",
"Correlated equilibria are sometimes more efficient than the Nash equilibria of a game without signals. We investigate whether the availability of quantum signals in the context of a classical strategic game may allow the players to achieve even better efficiency than in any correlated equilibrium with classical signals, and find the answer to be positive."
]
}
|
1101.3380
|
1513522290
|
We study the scenario where the players of a classical complete information game initially share an entangled pure quantum state. Each player may perform arbitrary local operations on his own qubits, but no direct communication is allowed. In this framework, we define the concept of quantum correlated equilibrium (QCE) for both normal and extensive form games of complete information. We show that in a normal form game, any outcome distribution implementable by a QCE can also be implemented by a classical correlated equilibrium (CE). We prove that the converse is surprisingly false: we give an example of an outcome distribution of a normal form game which is implementable by a CE, yet we prove that in any attempted quantum protocol beginning with a partition of a pure quantum state, at least one of the players will have an incentive to deviate. We extend our analysis to extensive form games, and find that the relation between classical and quantum correlated equilibria becomes less clear. We compare the outcome distributions implementable in our quantum model to those implementable by a classical extensive form correlated equilibrium (EFCE). For example, we show that there exists an extensive form complete information game and a distribution of outcomes which can be implemented by a QCE but not by any EFCE, in contrast to the result for normal form games. We also consider the concept of an immediate-revelation extensive form correlated equilibrium (IR-EFCE) and compare the power of IR-EFCE to EFCE and to QCE.
|
Since our goal is to study a mediator-free setting, it is necessary to restrict our model so that the initial shared state be pure (see Appendix ). This restriction is very significant and differs from work such as Zhang's @cite_20 which, while studying both pure and mixed initial states, limited its mention of pure states to those with a certain restricted form. Roughly speaking, the main difference is that we allow for pure states with many ancillary qubits, and these ancillary qubits can indeed affect the players' ability to gain utility by deviating. Furthermore, unlike La Mura's model @cite_11 , in our definition of equilibrium we do not restrict the local operations that a player might potentially perform on his own qubits.
|
{
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"2136629712",
"1551419266"
],
"abstract": [
"We propose a simple yet rich model to extend strategic games to the quantum setting, in which we define quantum Nash and correlated equilibria and study the relations between classical and quantum equilibria. Unlike all previous work that focused on qualitative questions on specific games of very small sizes, we quantitatively address the following fundamental question for general games of growing sizes: How much \"advantage\" can playing quantum strategies provide, if any? Two measures of the advantage are studied. 1. Since game mainly is about each player trying to maximize individual payoff, a natural measure is the increase of payoff by playing quantum strategies. We consider natural mappings between classical and quantum states, and study how well those mappings preserve equilibrium properties. Among other results, we exhibit a correlated equilibrium p whose quantum superposition counterpart [EQUATION] is far from being a quantum correlated equilibrium; actually a player can increase her payoff from almost 0 to almost 1 in a [0, 1]-normalized game. We achieve this by a tensor product construction on carefully designed base cases. The result can also be interpreted as in Meyer's comparison [47]: In a state no classical player can gain, one player using quantum computers has an huge advantage than continuing to play classically. 2. Another measure is the hardness of generating correlated equilibria, for which we propose to study correlation complexity, a new complexity measure for correlation generation. We show that there are n-bit correlated equilibria which can be generated by only one EPR pair followed by local operation (without communication), but need at least log2(n) classical shared random bits plus communication. The randomized lower bound can be improved to n, the best possible, assuming (even a much weaker version of) a recent conjecture in linear algebra. We believe that the correlation complexity, as a complexity-theoretical counterpart of the celebrated Bell's inequality, has independent interest in both physics and computational complexity theory and deserves more explorations.",
"Correlated equilibria are sometimes more efficient than the Nash equilibria of a game without signals. We investigate whether the availability of quantum signals in the context of a classical strategic game may allow the players to achieve even better efficiency than in any correlated equilibrium with classical signals, and find the answer to be positive."
]
}
|
1101.3594
|
1866504874
|
This work is motivated by the problem of image mis-registration in remote sensing and we are interested in determining the resulting loss in the accuracy of pattern classification. A statistical formulation is given where we propose to use data contamination to model and understand the phenomenon of image mis-registration. This model is widely applicable to many other types of errors as well, for example, measurement errors and gross errors. The impact of data contamination on classification is studied under a statistical learning theory framework. A closed-form asymptotic bound is established for the resulting loss in classification accuracy, which is less than @math for data contamination of an amount of @math . Our bound is sharper than similar bounds in the domain adaptation literature and, unlike such bounds, it applies to classifiers with an infinite Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on both synthetic and real datasets under various types of data contamination, including label flipping, feature swapping and the replacement of feature values with data generated from a random source such as a Gaussian or Cauchy distribution. Our simulation results show that the bound we derive is fairly tight.
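The label-flipping experiments mentioned in this abstract can be mimicked with a small synthetic sketch. The example below is only a hedged stand-in (two Gaussian classes and a nearest-centroid classifier; none of the settings come from the paper): it flips a fraction eps of the training labels and compares the resulting test accuracies.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n, d=5, shift=1.5):
    """Two Gaussian classes separated along the first coordinate (illustrative)."""
    y = rng.integers(0, 2, size=n)
    X = rng.standard_normal((n, d))
    X[:, 0] += shift * (2 * y - 1)
    return X, y

def nearest_centroid_fit(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def nearest_centroid_predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_train, y_train = make_data(2000)
X_test, y_test = make_data(5000)

for eps in (0.0, 0.05, 0.1, 0.2):
    y_cont = y_train.copy()
    flip = rng.random(len(y_cont)) < eps          # label flipping with probability eps
    y_cont[flip] = 1 - y_cont[flip]
    c = nearest_centroid_fit(X_train, y_cont)
    acc = (nearest_centroid_predict(c, X_test) == y_test).mean()
    print(f"contamination eps = {eps:.2f}: test accuracy = {acc:.3f}")
```

In this toy setting the accuracy loss stays small as eps grows, which is the qualitative behavior the asymptotic bound in the abstract is meant to capture.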
|
The bound established in @cite_31 is similar in nature, replacing the VC dimension in @cite_42 with the Rademacher complexity @cite_17 . However, there are important differences between the bound in Theorem (or that in @cite_31 ) and ours (i.e., Theorem ).
|
{
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_17"
],
"mid": [
"2953369858",
"",
"2579923771"
],
"abstract": [
"This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben- (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.",
"",
"We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes.We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines."
]
}
|
1101.3594
|
1866504874
|
This work is motivated by the problem of image mis-registration in remote sensing and we are interested in determining the resulting loss in the accuracy of pattern classification. A statistical formulation is given where we propose to use data contamination to model and understand the phenomenon of image mis-registration. This model is widely applicable to many other types of errors as well, for example, measurement errors and gross errors. The impact of data contamination on classification is studied under a statistical learning theory framework. A closed-form asymptotic bound is established for the resulting loss in classification accuracy, which is less than @math for data contamination of an amount of @math . Our bound is sharper than similar bounds in the domain adaptation literature and, unlike such bounds, it applies to classifiers with an infinite Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on both synthetic and real datasets under various types of data contamination, including label flipping, feature swapping and the replacement of feature values with data generated from a random source such as a Gaussian or Cauchy distribution. Our simulation results show that the bound we derive is fairly tight.
|
(1) The nature of the bounds is different: the bounds in @cite_42 @cite_31 are finite-sample generalization bounds, while ours is a large-sample (i.e., asymptotic) bound. (2) The quality of the bounds is different: the bounds in @cite_42 are union bounds that rely on the Vapnik-Chervonenkis (VC) dimension @cite_39 and are often quite loose ( @cite_31 uses the Rademacher complexity @cite_17 but remains quite loose). In contrast, our bound is asymptotically sharp. Assume the underlying function class has a finite VC dimension and let @math ; then the bound in Theorem becomes @math , which is looser than our bound @math for small @math . Since the @math term depends on the difficulty of the underlying problem and generally does not vanish, the bounds in @cite_42 in no way imply ours.
|
{
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_17",
"@cite_39"
],
"mid": [
"2953369858",
"",
"2579923771",
"2148603752"
],
"abstract": [
"This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben- (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.",
"",
"We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes.We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.",
"A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more."
]
}
|
1101.2374
|
2006729141
|
Clustering in high-dimensional spaces is nowadays a recurrent problem in many scientific domains but remains a difficult task from both the clustering accuracy and the result understanding points of view. This paper presents a discriminative latent mixture (DLM) model which fits the data in a latent orthonormal discriminative subspace with an intrinsic dimension lower than the dimension of the original space. By constraining model parameters within and between groups, a family of 12 parsimonious DLM models is exhibited, which allows the model to fit various situations. An estimation algorithm, called the Fisher-EM algorithm, is also proposed for estimating both the mixture parameters and the discriminative subspace. Experiments on simulated and real datasets highlight the good performance of the proposed approach as compared to existing clustering methods while providing a useful representation of the clustered data. The method is also applied to the clustering of mass spectrometry data.
|
Clustering is a traditional statistical problem which aims to divide a set of observations @math described by @math variables into @math homogeneous groups. The problem of clustering has been widely studied for years, and the reader may refer to @cite_5 @cite_45 for reviews on the clustering problem. However, interest in clustering is still increasing since more and more scientific fields need to cluster high-dimensional data. Moreover, such a task remains very difficult since clustering methods suffer from the well-known curse of dimensionality @cite_48 . Conversely, the empty space phenomenon @cite_14 , which refers to the fact that high-dimensional data do not fill the whole observation space but live in low-dimensional subspaces, gives hope of efficiently classifying high-dimensional data. This section first reviews the framework of model-based clustering before presenting the existing approaches for dealing with the problem of high dimension in clustering.
|
{
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_45",
"@cite_48"
],
"mid": [
"2011832962",
"145612104",
"1992419399",
""
],
"abstract": [
"Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...",
"For the estimation of probability densities in dimensions past two, representational difficulties predominate. Experience indicates that we should investigate the locations of the modes and proceed to describe the unknown density using these as local origins. The scaling system to be employed should also be data determined. Using such a philosophy, density estimation has been successfully carried out in the three dimensional case. Color and motion can be used as enhancement devices so that estimation in dimensions past three becomes feasible.",
"Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overviewof pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.",
""
]
}
|
1101.2374
|
2006729141
|
Clustering in high-dimensional spaces is nowadays a recurrent problem in many scientific domains but remains a difficult task from both the clustering accuracy and the result understanding points of view. This paper presents a discriminative latent mixture (DLM) model which fits the data in a latent orthonormal discriminative subspace with an intrinsic dimension lower than the dimension of the original space. By constraining model parameters within and between groups, a family of 12 parsimonious DLM models is exhibited, which allows the model to fit various situations. An estimation algorithm, called the Fisher-EM algorithm, is also proposed for estimating both the mixture parameters and the discriminative subspace. Experiments on simulated and real datasets highlight the good performance of the proposed approach as compared to existing clustering methods while providing a useful representation of the clustered data. The method is also applied to the clustering of mass spectrometry data.
|
Model-based clustering, which has been widely studied by @cite_5 @cite_15 in particular, aims to partition observed data into several groups which are modeled separately. The overall population is considered as a mixture of these groups and, most of the time, they are modeled by a Gaussian structure. By considering a dataset of @math observations @math which is divided into @math homogeneous groups and by assuming that the observations @math are independent realizations of a random vector @math , the mixture model density is then:
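The field above stops at the colon that introduces the density. For reference, a standard Gaussian mixture density of the kind described takes the following form (the notation for the mixing proportions, means and covariance matrices is assumed here rather than taken from the original):

```latex
f(y) \;=\; \sum_{k=1}^{K} \pi_k \, \phi(y;\, \mu_k, \Sigma_k),
\qquad \pi_k > 0, \qquad \sum_{k=1}^{K} \pi_k = 1,
```

where \phi(\cdot;\, \mu_k, \Sigma_k) denotes the multivariate Gaussian density with mean \mu_k and covariance matrix \Sigma_k, and K is the number of groups.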
|
{
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2011832962",
"1579271636"
],
"abstract": [
"Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...",
"The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and ge..."
]
}
|
1101.2374
|
2006729141
|
Clustering in high-dimensional spaces is nowadays a recurrent problem in many scientific domains but remains a difficult task from both the clustering accuracy and the result understanding points of view. This paper presents a discriminative latent mixture (DLM) model which fits the data in a latent orthonormal discriminative subspace with an intrinsic dimension lower than the dimension of the original space. By constraining model parameters within and between groups, a family of 12 parsimonious DLM models is exhibited, which allows the model to fit various situations. An estimation algorithm, called the Fisher-EM algorithm, is also proposed for estimating both the mixture parameters and the discriminative subspace. Experiments on simulated and real datasets highlight the good performance of the proposed approach as compared to existing clustering methods while providing a useful representation of the clustered data. The method is also applied to the clustering of mass spectrometry data.
|
The earliest approaches overcome the problem of high dimension in clustering by reducing the dimension before applying a traditional clustering method. Among unsupervised dimension reduction tools, PCA @cite_42 is the traditional and certainly the most widely used technique. It aims to project the data onto a lower-dimensional subspace whose axes are built by maximizing the variance of the projected data. Non-linear projection methods can also be used; we refer to @cite_43 for a review of these alternative dimension reduction techniques. In a similar spirit, the generative topographic mapping (GTM) @cite_22 finds a non-linear transformation of the data to map them onto a low-dimensional grid. Another way to reduce the dimension is to select relevant variables among the original variables. This problem has recently been considered in the clustering context by @cite_46 and @cite_34 . In @cite_1 and @cite_18 , the problem of feature selection for model-based clustering is recast as a model selection problem. However, such approaches remove variables, and consequently information which could have been discriminative for the clustering task.
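To make the reduce-then-cluster strategy concrete, the following self-contained sketch (synthetic data, PCA computed from an SVD, and a basic k-means loop; all names and parameter values are illustrative assumptions, not any of the cited methods) projects high-dimensional data onto its first two principal axes and then clusters the projected points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 3 groups living near a 2-D subspace of R^p.
n, p, K = 300, 50, 3
centers = rng.normal(scale=4.0, size=(K, 2))            # group centers in a 2-D latent space
labels_true = rng.integers(0, K, size=n)
latent = centers[labels_true] + rng.normal(scale=0.5, size=(n, 2))
lift = rng.normal(size=(2, p))                          # embed the 2-D structure into p dimensions
X = latent @ lift + 0.1 * rng.normal(size=(n, p))       # small ambient noise

# Step 1: dimension reduction by PCA (principal axes from the SVD of centered data).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                                       # projection onto the first 2 principal axes

# Step 2: cluster the projected data with a basic k-means (Lloyd's algorithm).
centroids = Z[rng.choice(n, size=K, replace=False)]
for _ in range(50):
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    new_centroids = np.array([Z[assign == k].mean(axis=0) if np.any(assign == k)
                              else centroids[k] for k in range(K)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print("cluster sizes:", np.bincount(assign, minlength=K))
```

The two steps are decoupled here, which is exactly the limitation the DLM/Fisher-EM approach addresses by estimating the subspace and the mixture jointly.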
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_42",
"@cite_1",
"@cite_43",
"@cite_46",
"@cite_34"
],
"mid": [
"2056243712",
"2107636931",
"2148694408",
"2047109555",
"1680622244",
"2157487910",
"2425246132"
],
"abstract": [
"This article is concerned with variable selection for cluster analysis. The problem is regarded as a model selection problem in the model-based cluster analysis context. A general model generalizing the model of Raftery and Dean (2006) is proposed to specify the role of each variable. This model does not need any prior assumptions about the link between the selected and discarded variables. Models are compared with BIC. Variables role is obtained through an algorithm embedding two backward stepwise variable selection algorithms for clustering and linear regression. The consistency of the resulting criterion is proved under regularity conditions. Numerical experiments on simulated datasets and a genomics application highlight the interest of the proposed variable selection procedure.",
"Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis which is based on a linear transformations between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.",
"Introduction * Properties of Population Principal Components * Properties of Sample Principal Components * Interpreting Principal Components: Examples * Graphical Representation of Data Using Principal Components * Choosing a Subset of Principal Components or Variables * Principal Component Analysis and Factor Analysis * Principal Components in Regression Analysis * Principal Components Used with Other Multivariate Techniques * Outlier Detection, Influential Observations and Robust Estimation * Rotation and Interpretation of Principal Components * Principal Component Analysis for Time Series and Other Non-Independent Data * Principal Component Analysis for Special Types of Data * Generalizations and Adaptations of Principal Component Analysis",
"We consider the problem of variable or feature selection for model-based clustering. The problem of comparing two nested subsets of variables is recast as a model comparison problem and addressed using approximate Bayes factors. A greedy search algorithm is proposed for finding a local optimum in model space. The resulting method selects variables (or features), the number of clusters, and the clustering model simultaneously. We applied the method to several simulated and real examples and found that removing irrelevant variables often improved performance. Compared with methods based on all of the variables, our variable selection method consistently yielded more accurate estimates of the number of groups and lower classification error rates, as well as more parsimonious clustering models and easier visualization of results.",
"Modern data analysis tools have to work on high-dimensional data, whose components are not independently distributed. High-dimensional spaces show surprising, counter-intuitive geometrical properties that have a large influence on the performances of data analysis tools. Among these properties, the concentration of the norm phenomenon results in the fact that Euclidean norms and Gaussian kernels, both commonly used in models, become inappropriate in high-dimensional spaces. This papers presents alternative distance measures and kernels, together with geometrical methods to decrease the dimension of the space. The methodology is applied to a typical time series prediction example.",
"This paper presents an unsupervised approach for feature selection and extraction in mixtures of generalized Dirichlet (GD) distributions. Our method defines a new mixture model that is able to extract independent and non-Gaussian features without loss of accuracy. The proposed model is learned using the expectation-maximization algorithm by minimizing the message length of the data set. Experimental results show the merits of the proposed methodology in the categorization of object images.",
"Clustering is a common unsupervised learning technique used to discover group structure in a set of data. While there exist many algorithms for clustering, the important issue of feature selection, that is, what attributes of the data should be used by the clustering algorithms, is rarely touched upon. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is the determination of the number of clusters, which clearly impacts and is influenced by the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Due to the introduction of a minimum message length model selection criterion, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Albert et al. @cite_43 study the reliability aspects of the United States Power Grid. In extreme summary, this work is particularly relevant for the large sample it analyses, representing the whole North American Power Grid, and for its focus on illustrating cascading effects by evaluating a connectivity loss property that the authors define. The results based on this metric show important differences between the various policies of node removal (random, node degree-based or betweenness-based). The betweenness computation is remarkable, as it is used to identify the important nodes in the network; however, the article does not take into account any sort of weight associated with the power lines.
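A hedged sketch of this kind of node-removal experiment is given below. It is not the authors' code: networkx is assumed, a synthetic small-world graph stands in for the real Grid data, and the surviving giant-component fraction is used as a simple proxy for the connectivity loss metric defined in the cited work.

# Node-removal experiment on a toy graph, comparing random, degree-based and
# betweenness-based removal policies (static ranking, computed once).
import random
import networkx as nx

def surviving_fraction(G, strategy, fraction=0.2):
    H = G.copy()
    n0 = H.number_of_nodes()
    if strategy == "random":
        order = random.sample(list(H.nodes()), n0)
    elif strategy == "degree":
        order = sorted(H.nodes(), key=H.degree, reverse=True)
    else:  # "betweenness"
        bc = nx.betweenness_centrality(H)
        order = sorted(H.nodes(), key=bc.get, reverse=True)
    for node in order[: int(fraction * n0)]:
        H.remove_node(node)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / n0

G = nx.connected_watts_strogatz_graph(500, 4, 0.1, seed=1)  # toy stand-in for the Grid
for strategy in ("random", "degree", "betweenness"):
    print(strategy, round(surviving_fraction(G, strategy), 3))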
|
{
"cite_N": [
"@cite_43"
],
"mid": [
"1968686485"
],
"abstract": [
"The magnitude of the August 2003 blackout affecting the United States has put the challenges of energy transmission and distribution into limelight. Despite all the interest and concerted effort, the complexity and interconnectivity of the electric infrastructure precluded us for a long time from understanding why certain events happened. In this paper we study the power grid from a network perspective and determine its ability to transfer power between generators and consumers when certain nodes are disrupted. We find that the power grid is robust to most perturbations, yet disturbances affecting key transmision substations greatly reduce its ability to function. We emphasize that the global properties of the underlying network must be understood as they greatly affect local behavior."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Crucitti et al. @cite_44 analyse the Italian Power Grid from a topological perspective. This work is particularly relevant for the concept of efficiency that is used to understand the performance of the network. This metric is evaluated as a function of the load tolerance of both edges and nodes. It is interesting that some sort of weights are used in this analysis: a capacity measure is associated with nodes, while the weight associated with edges is based on the residual capacity of nodes. In this article too, some failure-simulation strategies are taken into account (random and betweenness-based removal). However, the size of the sample analysed is small compared to other works, and the type of weight attributed to edges is not related to any physical quantity (e.g., line resistance), but only to topological betweenness.
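For illustration, the efficiency notion used in that line of work (the average of inverse shortest-path lengths over all node pairs) is available directly in networkx; the toy lattice and the choice of removing the five highest-betweenness nodes below are assumptions made only for the sake of the example.

# How global efficiency degrades when the most central nodes are removed.
import networkx as nx

G = nx.grid_2d_graph(10, 10)                       # toy stand-in for a transmission grid
print("initial efficiency:", round(nx.global_efficiency(G), 3))

bc = nx.betweenness_centrality(G)
for node in sorted(bc, key=bc.get, reverse=True)[:5]:
    G.remove_node(node)
print("after removing 5 central nodes:", round(nx.global_efficiency(G), 3))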
|
{
"cite_N": [
"@cite_44"
],
"mid": [
"2075542919"
],
"abstract": [
"Large-scale blackouts are an intrinsic drawback of electric power transmission grids. Here we analyze the structural vulnerability of the Italian GRTN power grid by using a model for cascading failures recently proposed in (Phys. Rev. E 69 (2004))."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
A different model is presented by Chassin @cite_37 , where the analysis focuses on the North American Power Grid. The authors start from the hypothesis that the Grid can be modelled as a Scale-free network. This work is extremely relevant for the large size of the sample analysed (more than 300000 nodes) and for the use of reliability measures that are typical of power engineering (e.g., loss of load probability) to quantify the failure characteristics of the network from a topological point of view. The similarity between the reliability results obtained with the authors' topological measures and those of other non-topological studies of electrical Grids is interesting. However, a study of betweenness is unfortunately missing. Given the size of the sample (although it is not explicitly stated whether components are considered or not), it would have been useful to compute the betweenness of the nodes in order to understand if and how betweenness behaves in such a large sample.
|
{
"cite_N": [
"@cite_37"
],
"mid": [
"1999647190"
],
"abstract": [
"The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi–Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Holmgren @cite_17 analyses the Nordic Power Grid, involving the Grids of Sweden, Finland, Norway and a great part of Denmark, and compares them with the U.S. Power Grid. A resilience analysis is performed, and the work also includes some fictitious failure scenarios for the Grid together with the possible solutions and their resulting benefits. A computation of the betweenness of the graph might have been useful to understand the differences between the samples; a weighted graph study might also have revealed further interesting aspects of the various networks, but both are missing.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2025761485"
],
"abstract": [
"In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Casals et al. @cite_15 analyse the whole European Power Grid and try to extract non-topological reliability measures by investigating the topological properties of the network. The network analysed is composed of almost 2800 nodes that span the European continent. The assumption is that the node degree distribution follows an exponential decay for every single network composing the European system, each one having a characteristic parameter specific to that Grid. Although this study is based on relevant samples, half of the considered Grids are small both in size and order (below 100 nodes). The most interesting aspect is the use of new indicators to assess network reliability, but, as the authors themselves explicitly state, these metrics need more testing and deeper study. As remarked for other works, there is no mention of using weights to characterize the edges in the networks.
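A small sketch of the kind of degree-distribution fit implied above follows; the random graph is only a stand-in for a national Grid, and the decay rate is estimated with a simple linear fit of the log complementary CDF, which is one of several reasonable choices rather than the exact procedure of the cited work.

# Estimating the characteristic parameter of an (assumed) exponential decay of
# the node degree distribution of a synthetic graph.
import numpy as np
import networkx as nx

G = nx.gnm_random_graph(2800, 3800, seed=0)           # stand-in for a national grid
degrees = np.array([d for _, d in G.degree()])

ks = np.arange(1, degrees.max() + 1)
ccdf = np.array([(degrees >= k).mean() for k in ks])  # complementary CDF P(K >= k)
mask = ccdf > 0
slope, intercept = np.polyfit(ks[mask], np.log(ccdf[mask]), 1)
print("estimated characteristic decay rate:", round(-slope, 3))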
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2002525197"
],
"abstract": [
"Publicat originalment a: WIT transactions on ecology and the environment, 2009, vol. 121, p. 527-537"
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Casals et al. @cite_39 consider the Grids of many European countries, analysing them both together and as separate entities. This work has an overall relevant sample, although no information is given for each single Grid, which might have smaller significance when analysed alone. The most interesting aspects are the evaluation of Small-world properties for the networks composing the European Grid and the resilience test with both random and node degree-based removal strategies. There is no mention of using weights to characterize the edges in the network, and no betweenness computation to find the critical nodes in terms of the paths they cover.
|
{
"cite_N": [
"@cite_39"
],
"mid": [
"2155202145"
],
"abstract": [
"We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Sole et al. @cite_13 go further in exploring the same data analysed in @cite_39 , focusing in particular on targeted attacks. The model is based on the assumption, also verified by empirical data, that there is no correlation between the degrees of connected nodes. This work has an overall relevant sample and it focuses on simulating different failure events (random or targeted), establishing an interesting correlation between topological and non-topological reliability studies. The main points for improvement are the small size of half of the samples used (below 100 nodes in order) and the possibility of introducing edge weights related to some of the physical properties. In addition, an evaluation of betweenness is missing, which would help to understand whether other nodes appear to be critical and should thus be targeted using this different removal metric.
|
{
"cite_N": [
"@cite_13",
"@cite_39"
],
"mid": [
"2112920805",
"2155202145"
],
"abstract": [
"The power grid defines one of the most important technological networks of our times and sustains our complex society. It has evolved for more than a century into an extremely huge and seemingly robust and well understood system. But it becomes extremely fragile as well, when unexpected, usually minimal, failures turn into unknown dynamical behaviours leading, for example, to sudden and massive blackouts. Here we explore the fragility of the European power grid under the effect of selective node removal. A mean field analysis of fragility against attacks is presented together with the observed patterns. Deviations from the theoretical conditions for network percolation (and fragmentation) under attacks are analysed and correlated with non topological reliability measures.",
"We present an analysis of the topological structure and static tolerance to errors and attacks of the September 2003 actualization of the Union for the Coordination of Transport of Electricity (UCTE) power grid, involving thirty-three different networks. Though every power grid studied has exponential degree distribution and most of them lack typical small-world topology, they display patterns of reaction to node loss similar to those observed in scale-free networks. We have found that the node removal behavior can be logarithmically related to the power grid size. This logarithmic behavior would suggest that, though size favors fragility, growth can reduce it. We conclude that, with the ever-growing demand for power and reliability, actual planning strategies to increase transmission systems would have to take into account this relative increase in vulnerability with size, in order to facilitate and improve the power grid design and functioning."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Crucitti et al. @cite_33 analyse the high voltage Grids of Italy, France and Spain to detect the most critical lines and propose solutions to address their vulnerabilities. This work has some very valuable aspects, such as the comparison of the Grids of three different countries (i.e., Italy, France and Spain), the identification of the most vulnerable edges, the quantification of the damage provoked by an attack, and possible improvements based on the efficiency metric. The sample used is considerably small; in addition, there is no use of weights to characterize the edges. Thus it is not possible to discover the weight of the most critical edges identified, nor whether there is a correlation with the unweighted analysis.
|
{
"cite_N": [
"@cite_33"
],
"mid": [
"2094343838"
],
"abstract": [
"Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the spanish 400 kV, the french 400 kV and the italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Rosato et al. @cite_26 analyse the same network samples studied in @cite_33 to investigate the main topological properties of these Grids (i.e., the Italian, French and Spanish Grids). The contributions of this work include the comparison of the Grids of three different countries, the identification of the most vulnerable edges, and the assessment of the damage caused by an attack and of the improvements achievable by adding edges. It also studies the node degree distribution and the shortest path length distribution for these samples. It is interesting to note how the authors clearly show the correlation between country geography and topological measures. The sample used is notably small; in addition, there is no use of weights to characterize the edges.
|
{
"cite_N": [
"@cite_26",
"@cite_33"
],
"mid": [
"2135031396",
"2094343838"
],
"abstract": [
"The topological properties of high-voltage electrical power transmission networks in several UE countries (the Italian 380 kV, the French 400 kV and the Spanish 400 kV networks) have been studied from available data. An assessment of the vulnerability of the networks has been given by measuring the level of damage introduced by a controlled removal of links. Topological studies could be useful to make vulnerability assessment and to design specific action to reduce topological weaknesses.",
"Electrical power grids are among the infrastructures that are attracting a great deal of attention because of their intrinsic criticality. Here we analyze the topological vulnerability and improvability of the spanish 400 kV, the french 400 kV and the italian 380 kV power transmission grids. For each network we detect the most critical lines and suggest how to improve the connectivity."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Watts @cite_7 dedicates a subsection of his book to exploring the properties of the Western States Power Grid. The study motivates the Small-world modeling of the Grid. The analysis focuses on specific metrics, such as network contraction parameters, and on the comparison between different models (i.e., relational and dimensional models). Since the Small-world property is the focus of the analysis, other typical Complex Network Analysis measures (e.g., node degree distribution, betweenness distribution) are not computed.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2019789576"
],
"abstract": [
"Everyone knows the small-world phenomenon: soon after meeting a stranger, we are surprised to discover that we have a mutual friend, or we are connected through a short chain of acquaintances. In his book, Duncan Watts uses this intriguing phenomenon--colloquially called \"six degrees of separation\"--as a prelude to a more general exploration: under what conditions can a small world arise in any kind of network?The networks of this story are everywhere: the brain is a network of neurons; organisations are people networks; the global economy is a network of national economies, which are networks of markets, which are in turn networks of interacting producers and consumers. Food webs, ecosystems, and the Internet can all be represented as networks, as can strategies for solving a problem, topics in a conversation, and even words in a language. Many of these networks, the author claims, will turn out to be small worlds.How do such networks matter? Simply put, local actions can have global consequences, and the relationship between local and global dynamics depends critically on the network's structure. Watts illustrates the subtleties of this relationship using a variety of simple models---the spread of infectious disease through a structured population; the evolution of cooperation in game theory; the computational capacity of cellular automata; and the sychronisation of coupled phase-oscillators.Watts's novel approach is relevant to many problems that deal with network connectivity and complex systems' behaviour in general: How do diseases (or rumours) spread through social networks? How does cooperation evolve in large groups? How do cascading failures propagate through large power grids, or financial systems? What is the most efficient architecture for an organisation, or for a communications network? This fascinating exploration will be fruitful in a remarkable variety of fields, including physics and mathematics, as well as sociology, economics, and biology."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
Wang et al. @cite_34 study the Power Grid to understand the kind of communication system needed to support the decentralized control required by the Smart Grid. The analysis is based both on real samples and on synthetic reference models from the IEEE literature. This work has some very valuable aspects, such as the investigation of a significant sample, the identification of a new model for the node degree probability distribution, and the investigation of the physical impedance distribution of the Grid samples. All these factors lead to the development of a new model to characterize the Power Grid. Aspects that might have been evaluated analytically are the path lengths in the various samples and the betweenness, in order to characterize those distributions analytically as well. The use of electrical properties is extremely interesting; however, the analysis performed is dissociated from the physical graph properties, and therefore a weighted graph structure is not considered.
|
{
"cite_N": [
"@cite_34"
],
"mid": [
"2128600589"
],
"abstract": [
"In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data."
]
}
|
1101.1118
|
1802775179
|
The traditional Power Grid has been designed in a hierarchical fashion, with Energy pushed from the large scale production facilities towards the end users. But with the increasing availability of micro and medium scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need to have incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the Complex Network Analysis field. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to the end users in participating in such a decentralized energy market, thus identifying what are the important topological parameters to work on to facilitate such open decentralized markets.
|
There are also some brief studies related to the Power Grid that appear as examples in more general discussions about Complex Networks. In particular, Amaral et al. in @cite_29 present a study of the Southern California Power Grid and model its node degree distribution with an exponential decay. Watts and Strogatz in @cite_27 show the Small-world phenomenon applied to the Western States Power Grid, while Newman, within a more general work @cite_45 , shows the exponential node degree distribution for the same Grid, and Barabasi et al. @cite_46 model the Power Grid as a Scale-free network characterized by a power-law node degree distribution.
|
{
"cite_N": [
"@cite_46",
"@cite_27",
"@cite_29",
"@cite_45"
],
"mid": [
"2121821841",
"2112090702",
"2104085672",
"2148606196"
],
"abstract": [
"Random networks with complex topology are common in Nature, describing systems as diverse as the world wide web or social and business networks. Recently, it has been demonstrated that most large networks for which topological information is available display scale-free features. Here we study the scaling properties of the recently introduced scale-free model, that can account for the observed power-law distribution of the connectivities. We develop a mean-field method to predict the growth dynamics of the individual vertices, and use this to calculate analytically the connectivity distribution and the scaling exponents. The mean-field method can be used to address the properties of two variants of the scale-free model, that do not display power-law scaling.",
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"We study the statistical properties of a variety of diverse real-world networks. We present evidence of the occurrence of three classes of small-world networks: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power law regime followed by a sharp cutoff; and (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks.",
"Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks."
]
}
|
1101.1893
|
2089755546
|
Random Boolean networks (RBNs) have been a popular model of genetic regulatory networks for more than four decades. However, most RBN studies have been made with random topologies, while real regulatory networks have been found to be modular. In this work, we extend classical RBNs to define modular RBNs. Statistical experiments and analytical results show that modularity has a strong effect on the properties of RBNs. In particular, modular RBNs have more attractors, and are closer to criticality when chaotic dynamics would be expected, than classical RBNs.
|
Bastolla and Parisi @cite_3 studied modularity within classical RBNs, i.e. functionally independent clusters, but not topological modularity.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2087471103"
],
"abstract": [
"This is the second paper of a series of two about the structural properties that influence the asymptotic dynamics of Random Boolean Networks. Here we study the functionally independent clusters in which the relevant elements, introduced and studied in our first paper [3], are subdivided. We show that the phase transition in Random Boolean Networks can also be described as a percolation transition. The statistical properties of the clusters of relevant elements (that we call modules) give an insight on the scaling behavior of the attractors of the critical networks that, according to Kauffman, have a biological analogy as a model of genetic regulatory systems."
]
}
|
1101.1893
|
2089755546
|
Random Boolean networks (RBNs) have been a popular model of genetic regulatory networks for more than four decades. However, most RBN studies have been made with random topologies, while real regulatory networks have been found to be modular. In this work, we extend classical RBNs to define modular RBNs. Statistical experiments and analytical results show that modularity has a strong effect on the properties of RBNs. In particular, modular RBNs have more attractors, and are closer to criticality when chaotic dynamics would be expected, than classical RBNs.
|
There are studies where RBNs are generated in cells of a 2D lattice, similar to a cellular automaton, where each RBN is weakly coupled with its von Neumann neighbors @cite_35 @cite_29 @cite_48 . The goal is to model intercellular signaling in a tissue.
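A minimal sketch of such a tissue-like arrangement is shown below, purely to illustrate the architecture: every cell of a small toroidal lattice hosts its own RBN, and one node per cell additionally reads the state of the corresponding node in its von Neumann neighbours. The lattice size, network size, connectivity K and OR-based coupling rule are assumptions for the example, not the exact models of the cited papers.

# Toy lattice of weakly coupled random Boolean networks.
import numpy as np

rng = np.random.default_rng(1)
L, N, K = 4, 10, 2          # lattice side, nodes per RBN, inputs per node

# Per-cell wiring and random Boolean functions (lookup tables over K inputs).
inputs = rng.integers(0, N, size=(L, L, N, K))
tables = rng.integers(0, 2, size=(L, L, N, 2 ** K))
state = rng.integers(0, 2, size=(L, L, N))

def step(state):
    new = np.empty_like(state)
    for i in range(L):
        for j in range(L):
            for n in range(N):
                bits = state[i, j, inputs[i, j, n]]          # K intra-cell inputs
                idx = int("".join(map(str, bits)), 2)
                val = tables[i, j, n, idx]
                if n == 0:  # weak coupling: node 0 is OR-ed with node 0 of the 4 neighbours
                    neigh = [state[(i - 1) % L, j, 0], state[(i + 1) % L, j, 0],
                             state[i, (j - 1) % L, 0], state[i, (j + 1) % L, 0]]
                    val = val | max(neigh)
                new[i, j, n] = val
    return new

for _ in range(20):
    state = step(state)
print("fraction of ON nodes after 20 steps:", state.mean())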
|
{
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_48"
],
"mid": [
"1493700998",
"1547600484",
"1526851954"
],
"abstract": [
"Random boolean networks (shortly, RBN) have proven useful in describing complex phenomena occurring at the unicellular level It is therefore interesting to investigate how their dynamical behavior is affected by cell-cell interactions, which mimics those occurring in tissues in multicellular organisms It has also been suggested that evolution may tend to adjust the parameters of the genetic network so that it operates close to a critical state, which should provide evolutionary advantage ; this hypothesis has received intriguing, although not definitive support from recent findings It is therefore particularly interesting to consider how the tissue-like organization alters the dynamical behavior of the networks close to a critical state In this paper we define a model tissue, which is a cellular automaton each of whose cells hosts a full RBN, and we report preliminary studies of the way in which the dynamics is affected.",
"Deciphering the influence of the interaction among the constituents of a complex system on the overall behaviour is one of the main goals of complex systems science. The model we present in this work is a 2D square cellular automaton whose of each cell is occupied by a complete random Boolean network. Random Boolean networks are a well-known simplified model of genetic regulatory networks and this model of interacting RBNs may be therefore regarded as a simplified model of a tissue or a monoclonal colony. The mechanism of cell-to-cell interaction is here simulated letting some nodes of a particular network being influenced by the state of some nodes belonging to its neighbouring cells. One possible means to investigate the overall dynamics of a complex system is studying its response to perturbations. Our analyses follow this methodological approach. Even though the dynamics of the system is far from trivial we could show in a clear way how the interaction affects the dynamics and the global degree of order.",
"Information processing and information flow occur at many levels in the course of an organism's development and throughout its lifespan. Biological networks inside cells transmit information from their inputs (e.g. the concentrations of proteins or other signaling molecules) to their outputs (e.g. the expression levels of various genes). Moreover, cells do not exist in isolation, but they constantly interact with one another. We study the information flow in a model of interacting genetic networks, which are represented as Boolean graphs. It is observed that the information transfer among the networks is not linearly dependent on the amount of nodes that are able to influence the state of genes in surrounding cells."
]
}
|
1101.0091
|
1505706396
|
We evaluate optimized parallel sparse matrix-vector operations for two representative application areas on widespread multicore-based cluster configurations. First the single-socket baseline performance is analyzed and modeled with respect to basic architectural properties of standard multicore chips. Going beyond the single node, parallel sparse matrix-vector operations often suffer from an unfavorable communication to computation ratio. Starting from the observation that nonblocking MPI is not able to hide communication cost using standard MPI implementations, we demonstrate that explicit overlap of communication and computation can be achieved by using a dedicated communication thread, which may run on a virtual core. We compare our approach to pure MPI and the widely used "vector-like'' hybrid programming strategy.
|
In recent years the performance of various spMVM algorithms has been evaluated by several groups @cite_9 @cite_10 @cite_14 . Covering different matrix storage formats and implementations on various types of hardware, they reviewed a more or less large number of publicly available matrices and reported on the obtained performance. Scalable parallel spMVM implementations have also been proposed @cite_3 @cite_6 , mostly based on an MPI-only strategy. Hybrid parallel spMVM approaches had already been devised before the emergence of multicore processors @cite_15 @cite_11 . Recently, a "vector mode" approach could not compete with a scalable MPI implementation for a specific problem on a Cray system @cite_3 . There is no up-to-date literature that systematically investigates novel features like multicore, ccNUMA node structure, and simultaneous multithreading (SMT) for hybrid parallel spMVM.
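The overlap idea behind these hybrid schemes can be illustrated with a short, heavily simplified sketch. It uses mpi4py and scipy rather than the C/MPI+OpenMP setting of the works above, a ring-shaped halo exchange and random sparse blocks as placeholders, and it leaves aside the point made in the abstract that nonblocking MPI alone may not progress asynchronously, which is exactly why a dedicated communication thread is considered.

# Distributed spMVM sketch: compute the purely local part while the halo
# exchange is in flight, then finish the remote contribution.
from mpi4py import MPI
import numpy as np
import scipy.sparse as sp

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_loc = 1000  # locally owned rows / vector entries (illustrative)
rng = np.random.default_rng(rank)

# Diagonal block: couples only local unknowns.
A_diag = sp.random(n_loc, n_loc, density=0.01, random_state=rank, format="csr")
# Off-diagonal block: couples local rows to the halo received from the left neighbour.
A_offd = sp.random(n_loc, n_loc, density=0.001, random_state=rank + size, format="csr")

x_loc = rng.random(n_loc)
halo = np.empty(n_loc)
left, right = (rank - 1) % size, (rank + 1) % size

# Post the nonblocking halo exchange (ring pattern, purely illustrative).
reqs = [comm.Irecv(halo, source=left), comm.Isend(x_loc, dest=right)]

# Overlap: the part that needs no remote data is computed while messages travel.
y_loc = A_diag @ x_loc

# Wait for the halo, then add the remote contribution.
MPI.Request.Waitall(reqs)
y_loc += A_offd @ halo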
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2128853364",
"1975116854",
"2062240003",
"2092013581",
"2049585661",
"2103877122",
"2522590958"
],
"abstract": [
"Sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In contrast to the uniform regularity of dense linear algebra, sparse operations encounter a broad spectrum of matrices ranging from the regular to the highly irregular. Harnessing the tremendous potential of throughput-oriented processors for sparse operations requires that we expose substantial fine-grained parallelism and impose sufficient regularity on execution paths and memory access patterns. We explore SpMV methods that are well-suited to throughput-oriented architectures like the GPU and which exploit several common sparsity classes. The techniques we propose are efficient, successfully utilizing large percentages of peak bandwidth. Furthermore, they deliver excellent total throughput, averaging 16 GFLOP s and 10 GFLOP s in double precision for structured grid and unstructured mesh matrices, respectively, on a GeForce GTX 285. This is roughly 2.8 times the throughput previously achieved on Cell BE and more than 10 times that of a quad-core Intel Clovertown system.",
"In this paper, we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided, and thus unsuccessful attempts for optimization. In order to gain an insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. In addition, we investigate the parallel version of the kernel and report on the corresponding performance results and their relation to each architecture's specific multithreaded configuration. Based on our experiments, we extract useful conclusions that can serve as guidelines for the optimization process of both single and multithreaded versions of the kernel.",
"Abstract The sparse matrix–vector product is an important computational kernel that runs ineffectively on many computers with super-scalar RISC processors. In this paper we analyse the performance of the sparse matrix–vector product with symmetric matrices originating from the FEM and describe techniques that lead to a fast implementation. It is shown how these optimisations can be incorporated into an efficient parallel implementation using message-passing. We conduct numerical experiments on many different machines and show that our optimisations speed up the sparse matrix–vector multiplication substantially.",
"We present a massively parallel implementation of symmetric sparse matrix-vector product for modern clusters with scalar multi-core CPUs. Matrices with highly variable structure and density arising from unstructured three-dimensional FEM discretizations of mechanical and diffusion problems are studied. A metric of the effective memory bandwidth is introduced to analyze the impact on performance of a set of simple, well-known optimizations: matrix reordering, manual prefetching, and blocking. A modification to the CRS storage improving the performance on multi-core Opterons is shown. The performance of an entire SMP blade rather than the per-core performance is optimized. Even for the simplest 4 node mechanical element our code utilizes close to 100 of the per-blade available memory bandwidth. We show that reducing the storage requirements for symmetric matrices results in roughly two times speedup. Blocking brings further storage savings and a proportional performance increase. Our results are compared to existing state-of-the-art implementations of SpMV, and to the dense BLAS2 performance. Parallel efficiency on 5400 Opteron cores of the Cray XT4 cluster is around 80-90 for problems with approximately 25^3 mesh nodes per core. For a problem with 820 million degrees of freedom the code runs with a sustained performance of 5.2 TeraFLOPs, over 20 of the theoretical peak.",
"Most HPC systems are clusters of shared memory nodes. Parallel programming must combine the distributed memory parallelization on the node interconnect with the shared memory parallelization inside each node. The hybrid MPI+OpenMP programming model is compared with pure MPI, compiler based parallelization, and other parallel programming models on hybrid architectures. The paper focuses on bandwidth and latency aspects, and also on whether programming paradigms can separate the optimization of communication and computation. Benchmark results are presented for hybrid and pure MPI communication. This paper analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes.",
"We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"Eigenvalue problems involving very large sparse matrices are common to various fields in science. In general, the numerical core of iterative eigenvalue algorithms is a matrix-vector multiplication (MVM) involving the large sparse matrix. We present three different programming approaches for parallel MVM on present day supercomputers. In addition to a pure message-passing approach, two hybrid parallel implementations are introduced based on simultaneous use of message-passing and shared-memory programming models. For a modern SMP cluster (HITACHI SR8000) performance and scalability of the hybrid implementations are discussed and compared with the pure message-passing approach on massively-parallel systems (CRAY T3E), vector computers (NEC SX5e) and distributed shared-memory systems (SGI Origin3800)."
]
}
|
1101.0350
|
1584829928
|
The proliferation of peer-to-peer (P2P) file sharing protocols is due to their efficient and scalable methods for data dissemination to numerous users. But many of these networks have no provisions to provide users with long term access to files after the initial interest has diminished, nor are they able to guarantee protection for users from malicious clients that wish to implicate them in incriminating activities. As such, users may turn to supplementary measures for storing and transferring data in P2P systems. We present a new file sharing paradigm, called a Graffiti Network, which allows peers to harness the potentially unlimited storage of the Internet as a third-party intermediary. Our key contributions in this paper are (1) an overview of a distributed system based on this new threat model and (2) a measurement of its viability through a one-year deployment study using a popular web-publishing platform. The results of this experiment motivate a discussion about the challenges of mitigating this type of file sharing in a hostile network environment and how web site operators can protect their resources.
|
Much of the previous work on P2P storage systems that provide block storage across multiple nodes is based on distributed hash tables @cite_19 @cite_5 @cite_28 . These approaches share the deficiencies of the BitTorrent model: peers download file blocks directly from other peers, thereby losing anonymity, and the systems offer no mechanism for long-term availability of less popular files after peers disconnect from the network. Other systems focus on providing anonymous and secure P2P data storage @cite_6 . The POTSHARDS system provides secure long-term data storage even after the content originator no longer exists, using secret splitting and data reconstruction techniques to handle partial losses @cite_17 ; their approach assumes multiple, semi-reliable storage backends that are willing to host a client's data. The Freenet anonymous storage system uses key-based routing to locate files stored on remote peers @cite_10 . As discussed in @cite_19 , Freenet's anonymity limits both its reliability and performance: files are not associated with any predictable server, and thus unpopular content may disappear since no one is responsible for maintaining replicas.
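The secret-splitting idea behind POTSHARDS can be illustrated with a small sketch. The code below is our own n-of-n XOR splitting for illustration only (POTSHARDS itself combines secret splitting with approximate pointers and distributed-RAID techniques, which are not shown): a block is split into shares that are individually meaningless, and recombining all shares restores it.

import os

def split_secret(block, n_shares):
    # n-of-n XOR splitting: the first n-1 shares are uniformly random and the
    # last share is the XOR of the block with all of them.
    shares = [os.urandom(len(block)) for _ in range(n_shares - 1)]
    last = block
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine_shares(shares):
    # XOR-ing every share cancels the random pads and recovers the block.
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

block = b"archived data block"
shares = split_secret(block, 4)
print(combine_shares(shares) == block)            # True: all shares together restore the block
print(shares[0][:4].hex(), shares[1][:4].hex())   # individual shares look random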
|
{
"cite_N": [
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2163598690",
"2150676586",
"2104210894",
"1663493649",
"1566782895"
],
"abstract": [
"",
"We describe a system that we have designed and implemented for publishing content on the web. Our publishing scheme has the property that it is very difficult for any adversary to censor or modify the content. In addition, the identity of the publisher is protected once the content is posted. Our system differs from others in that we provide tools for updating or deleting the published content, and users can browse the content in the normal point and click manner using a standard web browser and a client-side proxy that we provide. All of our code is freely available.",
"The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.",
"OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.",
"",
"Users are storing ever-increasing amounts of information digitally, driven by many factors including government regulations and the public's desire to digitally record their personal histories. Unfortunately, many of the security mechanisms that modern systems rely upon, such as encryption, are poorly suited for storing data for indefinitely long periods of time--it is very difficult to manage keys and update cryptosystems to provide secrecy through encryption over periods of decades. Worse, an adversary who can compromise an archive need only wait for cryptanalysis techniques to catch up to the encryption algorithm used at the time of the compromise in order to obtain \"secure\" data. To address these concerns, we have developed POTSHARDS, an archival storage system that provides long-term security for data with very long lifetimes without using encryption. Secrecy is achieved by using provably secure secret splitting and spreading the resulting shares across separately-managed archives. Providing availability and data recovery in such a system can be difficult; thus, we use a new technique, approximate pointers, in conjunction with secure distributed RAID techniques to provide availability and reliability across independent archives. To validate our design, we developed a prototype POTSHARDS implementation, which has demonstrated \"normal\" storage and retrieval of user data using indexes, the recovery of user data using only the pieces a user has stored across the archives and the reconstruction of an entire failed archive."
]
}
|
1101.0892
|
2950683529
|
When data productions and consumptions are heavily unbalanced and when the origins of data queries are spatially and temporally distributed, the so called in-network data storage paradigm supersedes the conventional data collection paradigm in wireless sensor networks (WSNs). In this paper, we first introduce geometric quorum systems (along with their metrics) to incarnate the idea of in-network data storage. These quorum systems are "geometric" because curves (rather than discrete node sets) are used to form quorums. We then propose GeoQuorum as a new quorum system, for which the quorum forming curves are parameterized. Though our proposal stems from the existing work on using curves to guide data replication and retrieval in dense WSNs, we significantly expand this design methodology, by endowing GeoQuorum with a great flexibility to fine-tune itself towards different application requirements. In particular, the tunability allows GeoQuorum to substantially improve the load balancing performance and to remain competitive in energy efficiency. Both our analysis and simulations confirm the performance enhancement brought by GeoQuorum.
|
Traditional quorum systems are confined to 2D space and hence only allow for limited designs, such as the grid shown in Figure (a) or the B-Grid @cite_19 shown in Figure (b) for improving robustness. Similar ideas were re-introduced into mobile ad hoc networks (MANETs) and WSNs @cite_2 @cite_17 @cite_6 , though sometimes under different names. These designs are often so rigid that they offer very little tunability for adapting a system to different application requirements. To improve the flexibility of quorum systems, probabilistic quorum systems @cite_12 were introduced to relax the intersection rule (making it a random variable) and to leave more freedom in trading load for robustness; they were later applied to MANETs to cope with node mobility @cite_16 . Interested readers are referred to @cite_14 @cite_11 @cite_22 for more recent developments in probabilistic quorum systems. In general, probabilistic quorum systems are designed to cope with system dynamics (e.g., node mobility), and hence trade system efficiency for higher robustness. As we explained in Sec. , energy efficiency is a crucial issue in WSNs, whereas nodes in WSNs are often static. Consequently, we advocate a deterministic design for quorum systems, while relying on other techniques (rather than pure randomization) to improve their flexibility.
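To make the intersection rule concrete, the sketch below (our own toy example of the classic grid construction, not the GeoQuorum design) arranges k x k nodes in a grid and takes one full row plus one full column as a quorum, so that any two quorums necessarily intersect while each quorum touches only 2k - 1 of the k^2 nodes.

import itertools

def grid_quorum(k, row, col):
    # Quorum = all nodes of `row` plus all nodes of `col` in a k x k grid
    # whose nodes are numbered 0 .. k*k - 1 (quorum size 2k - 1).
    row_nodes = {row * k + c for c in range(k)}
    col_nodes = {r * k + col for r in range(k)}
    return row_nodes | col_nodes

def all_pairs_intersect(k):
    # The defining quorum property: every pair of quorums shares a node.
    quorums = [grid_quorum(k, r, c) for r, c in itertools.product(range(k), repeat=2)]
    return all(q1 & q2 for q1, q2 in itertools.combinations(quorums, 2))

print(sorted(grid_quorum(4, row=1, col=2)))   # 7 of the 16 nodes
print(all_pairs_intersect(4))                 # True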
|
{
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_22",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2134011626",
"2121047796",
"",
"",
"2142009827",
"2166617227",
"1785573601",
"2113954151",
"2117339995"
],
"abstract": [
"Reliable storage of data with concurrent read write accesses (or query update) is an ever recurring issue in distributed settings. In mobile ad hoc networks, the problem becomes even more challenging due to highly dynamic and unpredictable topology changes. It is precisely this unpredictability that makes probabilistic protocols very appealing for such environments. Inspired by the principles of probabilistic quorum systems, we present a Probabilistic quorum system for ad hoc networks Pan), a collection of protocols for the reliable storage of data in mobile ad hoc networks. Our system behaves in a predictable way due to the gossip-based diffusion mechanism applied for quorum accesses, and the protocol overhead is reduced by adopting an asymmetric quorum construction. We present an analysis of our Pan system, in terms of both reliability and overhead, which can be used to fine tune protocol parameters to obtain the desired tradeoff between efficiency and fault tolerance. We confirm the predictability and tunability of Pan through simulations with ns-2.",
"Providing reliable group communication is an ever recurring topic in distributed settings. In mobile ad hoc networks, this problem is even more significant since all nodes act as peers, while it becomes more challenging due to highly dynamic and unpredictable topology changes. In order to overcome these difficulties, we deviate from the conventional point of view, i.e., we \"fight fire with fire,\" by exploiting the nondeterministic nature of ad hoc networks. Inspired by the principles of gossip mechanisms and probabilistic quorum systems, we present in this paper PILOT (probabilistic lightweight group communication system) for ad hoc networks, a two-layer system consisting of a set of protocols for reliable multicasting and data sharing in mobile ad hoc networks. The performance of PILOT is predictable and controllable in terms of both reliability (fault tolerance) and efficiency (overhead). We present an analysis of PILOT's performance, which is used to fine-tune protocol parameters to obtain the desired trade off between reliability and efficiency. We confirm the predictability and tunability of PILOT through simulations with ns-2.",
"",
"",
"A quorum system is a collection of sets (quorums) every two of which intersect. Quorum systems have been used for many applications in the area of distributed systems, including mutual exclusion, data replication, and dissemination of information. Given a strategy to pick quorums, the load LS is the minimal access probability of the busiest element, minimizing over the strategies. The capacity is the highest quorum accesses rate that cS can handle, so @math . The availability of a quorum system cS is the probability that at least one quorum survives, assuming that each element fails independently with probability p. A tradeoff between LS and the availability of cS is shown. We present four novel constructions of quorum systems, all featuring optimal or near optimal load, and high availability. The best construction, based on paths in a grid, has a load of @math , and a failure probability of @math when the elements fail with probability @math . Moreover, even in the presence of faults, with exponentially high probability the load of this system is still @math . The analysis of this scheme is based on percolation theory.",
"A distributed mobility management scheme using a class of uniform quorum systems (UQS) is proposed for ad hoc networks. In the proposed scheme, location databases are stored in the network nodes themselves, which form a self-organizing virtual backbone within the flat network structure. The databases are dynamically organized into quorums, every two of which intersect at a constant number of databases. Upon location update or call arrival, a mobile's location information is written to or read from all the databases of a quorum, chosen in a nondeterministic manner. Compared with a conventional scheme [such as the use of home location register (HLR)] with fixed associations, this scheme is more suitable for ad hoc networks, where the connectivity of the nodes with the rest of the network can be intermittent and sporadic and the databases are relatively unstable. We introduce UQS, where the size of the quorum intersection is a design parameter that can be tuned to adapt to the traffic and mobility patterns of the network nodes. We propose the construction of UQS through the balanced incomplete block designs. The average cost, due to call loss and location updates using such systems, is analyzed in the presence of database disconnections. Based on the average cost, we investigate the tradeoff between the system reliability and the cost of location updates in the UQS scheme. The problem of optimizing the quorum size under different network traffic and mobility patterns is treated numerically. A dynamic and distributed HLR scheme, as a limiting case of the UQS, is also analyzed and shown to be suboptimal in general. It is also shown that partitioning of the network is sometimes necessary to reduce the cost of mobility management.",
"A distributed mobility management scheme using randomized database groups (RDG) is proposed and analyzed for ad-hoc networks. In the proposed scheme, location databases are stored in the network nodes, comprising a virtual backbone within the flat network architecture. Upon location update or call arrival, a mobile's location information is written to or read from, respectively, a group of randomly chosen databases. Compared with a centralized scheme (such as the home location register) with fixed associations, this scheme is more suitable for ad-hoc networks, where the connectivity of the nodes with the rest of the network can be intermittent and sporadic, and the databases are relatively unstable. The expected cost due to call loss and location updates using this scheme is analyzed in the presence of database disconnections. Based on the expected cost, we present the numerical determination and approximation of the optimal total location database number, the optimal database access group size, and the optimal location update frequency, under different network stability, traffic, and mobility conditions. Numerical results show that the RDG scheme provides an robust and efficient approach to ad-hoc mobility management.",
"We initiate the study of probabilistic quorum systems, a technique for providing consistency of replicated data with high levels of assurance despite the failure of data servers. We show that this technique offers effective load reduction on servers and high availability. We explore probabilistic quorum systems both for services tolerant of benign server failures and for services tolerant of arbitrary (Byzantine) ones. We also prove bounds on the server load that can be achieved with these techniques.",
"Sink mobility brings new challenges to large-scale sensor networking. It suggests that information about each mobile sink's location be continuously propagated through the sensor field to keep all sensor nodes updated with the direction of forwarding future data reports. Unfortunately frequent location updates from multiple sinks can lead to both excessive drain of sensors' limited battery power supply and increased collisions in wireless transmissions. In this paper we describe TTDD, a Two-Tier Data Dissemination approach that provides scalable and efficient data delivery to multiple mobile sinks. Each data source in TTDD proactively builds a grid structure which enables mobile sinks to continuously receive data on the move by flooding queries within a local cell only. TTDD's design exploits the fact that sensor nodes are stationary and location-aware to construct and maintain the grid structures with low overhead. We have evaluated TTDD performance through both analysis and extensive simulation experiments. Our results show that TTDD handles multiple mobile sinks efficiently with performance comparable with that of stationary sinks."
]
}
|
1012.5723
|
2952860919
|
Connectivity and capacity are two fundamental properties of wireless multi-hop networks. The scalability of these properties has been a primary concern for which asymptotic analysis is a useful tool. Three related but logically distinct network models are often considered in asymptotic analyses, viz. the dense network model, the extended network model and the infinite network model, which consider respectively a network deployed in a fixed finite area with a sufficiently large node density, a network deployed in a sufficiently large area with a fixed node density, and a network deployed in @math with a sufficiently large node density. The infinite network model originated from continuum percolation theory and asymptotic results obtained from the infinite network model have often been applied to the dense and extended networks. In this paper, through two case studies related to network connectivity on the expected number of isolated nodes and on the vanishing of components of finite order k>1 respectively, we demonstrate some subtle but important differences between the infinite network model and the dense and extended network models. Therefore extra scrutiny has to be used in order for the results obtained from the infinite network model to be applicable to the dense and extended network models. Asymptotic results are also obtained on the expected number of isolated nodes, the vanishingly small impact of the boundary effect on the number of isolated nodes and the vanishing of components of finite order k>1 in the dense and extended network models using a generic random connection model.
|
Extensive research has been done on connectivity problems using the well-known random geometric graph and the unit disk connection model, which is usually obtained by distributing @math vertices randomly and uniformly in a given area and connecting any two vertices iff their distance is smaller than or equal to a given threshold @math @cite_29 @cite_6 . Significant results, e.g. on the critical transmission range and the critical neighbor number for connectivity, have been obtained @cite_15 @cite_28 @cite_0 @cite_26 @cite_18 @cite_23 @cite_6 @cite_1 .
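For concreteness, the sketch below (our own illustration; the exact constants and regimes studied in the cited papers differ) samples the model just described: n nodes uniformly distributed in the unit square, an edge between two nodes whenever their distance is at most r, and a simple connectivity check around a radius of order sqrt(log n / (pi n)).

import math
import random

def random_geometric_graph(n, r, seed=0):
    # n uniform points in the unit square; edge iff distance <= r.
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def is_connected(adj):
    # Depth-first search from node 0.
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(adj)

n = 500
r = math.sqrt(math.log(n) / (math.pi * n))               # order of the connectivity threshold
print(is_connected(random_geometric_graph(n, 1.2 * r)))  # typically True
print(is_connected(random_geometric_graph(n, 0.6 * r)))  # typically False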
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_15"
],
"mid": [
"",
"2167140162",
"1967533244",
"2108986762",
"2012430523",
"",
"2039302875",
"2116859296",
""
],
"abstract": [
"",
"We analyze various critical transmitting sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions. For a given property of the network, there is a critical threshold, corresponding to the minimum amount of the communication effort or power expenditure by individual nodes, above (respectively, below) which the property exists with high (respectively, a low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors, and their transmitting sensing ranges. More specifically, we consider the following problems: assume that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V, how large must the sensing range R sub SENSE be to ensure a given degree of coverage of the region to monitor? For a given transmission range R sub TRANS , what is the minimum (respectively, maximum) degree of the network? What is then the typical hop diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks.",
"Unlike wired networks, wireless networks do not come with links. Rather, links have to be fashioned out of the ether by nodes choosing neighbors to connect to. Moreover the location of the nodes may be random.The question that we resolve is: How many neighbors should each node be connected to in order that the overall network is connected in a multi-hop fashion? We show that in a network with n randomly placed nodes, each node should be connected to Θ(log n) nearest neighbors. If each node is connected to less than 0.074 log n nearest neighbors then the network is asymptotically disconnected with probability one as n increases, while if each node is connected to more than 5.1774 log n nearest neighbors then the network is asymptotically connected with probability approaching one as n increases. It appears that the critical constant may be close to one, but that remains an open problem.These results should be contrasted with some works in the 1970s and 1980s which suggested that the \"magic number\" of nearest neighbors should be six or eight.",
"For n points uniformly randomly distributed on the unit cube in d dimensions, with d≥2, let ρn (respectively, σn) denote the minimum r at which the graph, obtained by adding an edge between each pair of points distant at most r apart, is k-connected (respectively, has minimum degree k). Then P[ρn=σn]1 as n∞. ©1999 John Wiley & Sons, Inc. Random Struct. Alg., 15, 145–164, 1999",
"Let P be a Poisson process of intensity one in a square Sn of area n. For a fixed integer k, join every point of P to its k nearest neighbours, creating an undirected random geometric graph Gn,k. We prove that there exists a critical constant ccrit such that for c ccrit, Gn,⌊clog n⌋ is connected with probability tending to 1 as n → ∞. This answers a question posed by the authors in [1]. Let P be a Poisson process of intensity one in a square Sn of area n. For a fixed integer k, we join every point of P to its k nearest neighbours, creating an undirected random geometric graph GSn,k = Gn,k in which every vertex has degree at least k. The connectivity of these graphs was studied by the present authors in [1]. It is not hard to see that Gn,k becomes connected around k = �(log n), and we proved in [1] that if k(n) ≤ 0.3043log n then the probability that Gn,k(n) is connected tends to zero as n → ∞, while if k(n) ≥ 0.5139log n then the probability that Gn,k(n) is connected tends to one as n → ∞. However, we were unable to prove the natural conjecture that there exists a critical constant ccrit such that for c ccrit, P(Gn,⌊clog n⌋ is connected) → 1 as n → ∞. In this paper we prove this conjecture.",
"",
"A model of a packet radio network in which transmitters with range R are distributed according to a two-dimensional Poisson point process with density D is examined. To ensure network connectivity, it is shown that pi R sup 2 D, the expected number of nearest neighbors of a transmitter, must grow logarithmically with the area of the network. For an infinite area there exists an infinite connected component with nonzero probability if pi R sup 2 D>N sub 0 , for some critical value N sub 0 . It is shown that 2.195 >",
"A range assignment to the nodes in a wireless ad hoc network induces a topology in which there is an edge between two nodes if and only if both of them are within each other's transmission range. The critical transmission radius for k-connectivity is the smallest r such that if all nodes have the transmission radius r,the induce topology is k-connected. The critical neighbor number for k-connectivity is the smallest integer l such that if every node sets its transmission radius equal to the distance between itself an its l-th nearest neighbor, the induce topology is k-connecte. In this paper, we study the asymptotic critical transmission radius for k-connectivity an asymptotic critical neighbor number for k-connectivity in a wireless ad hoc network whose nodes are uniformly an independently distribute in a unit-area square or disk. We provide a precise asymptotic distribution of the critical transmission radius for k-connectivity and an improve asymptotic almost sure upper bound on the critical neighbor number for k-connectivity.",
""
]
}
|
1012.5723
|
2952860919
|
Connectivity and capacity are two fundamental properties of wireless multi-hop networks. The scalability of these properties has been a primary concern for which asymptotic analysis is a useful tool. Three related but logically distinct network models are often considered in asymptotic analyses, viz. the dense network model, the extended network model and the infinite network model, which consider respectively a network deployed in a fixed finite area with a sufficiently large node density, a network deployed in a sufficiently large area with a fixed node density, and a network deployed in @math with a sufficiently large node density. The infinite network model originated from continuum percolation theory and asymptotic results obtained from the infinite network model have often been applied to the dense and extended networks. In this paper, through two case studies related to network connectivity on the expected number of isolated nodes and on the vanishing of components of finite order k>1 respectively, we demonstrate some subtle but important differences between the infinite network model and the dense and extended network models. Therefore extra scrutiny has to be used in order for the results obtained from the infinite network model to be applicable to the dense and extended network models. Asymptotic results are also obtained on the expected number of isolated nodes, the vanishingly small impact of the boundary effect on the number of isolated nodes and the vanishing of components of finite order k>1 in the dense and extended network models using a generic random connection model.
|
Other work in the area includes @cite_31 @cite_27 @cite_24 @cite_7 , which study, from the percolation perspective, the impact on connectivity of mutual interference caused by simultaneous transmissions, of physical-layer cooperative transmissions, of directional antennas, and of unreliable links, respectively.
|
{
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_7"
],
"mid": [
"2120013967",
"2099979494",
"",
"2161575750"
],
"abstract": [
"Connectivity is a crucial issue in wireless ad hoc networks (WANETs). Gupta and Kumar have shown that in WANETs using omnidirectional antennas, the critical transmission range to achieve asymptotic connectivity is O(radic(log n n)) if n nodes are uniformly and independently distributed in a disk of unit area. In this paper, we investigate the connectivity problem when directional antennas are used. We first assume that each node in the network randomly beam forms in one beam direction. We find that there also exists a critical transmission range for a WANET to achieve asymptotic connectivity, which corresponds to a critical transmission power (CTP). Since CTP is dependent on the directional antenna pattern, the number of beams, and the propagation environment, we then formulate a non-linear programming problem to minimize the CTP. We show that when directional antennas use the optimal antenna pattern, the CTP in a WANET using directional antennas at both transmitter and receiver is smaller than that when either transmitter or receiver uses directional antenna and is further smaller than that when only omnidirectional antennas are used. Moreover, we revisit the connectivity problem assuming that two neighboring nodes using directional antennas can be guaranteed to beam form to each other to carry out the transmission. A smaller critical transmission range than that in the previous case is found, which implies smaller CTP.",
"Extensive research has demonstrated the potential improvement in physical layer performance when multiple radios transmit concurrently in the same radio channel. We consider how such cooperation affects the requirements for full connectivity and percolation in large wireless ad hoc networks. Both noncoherent and coherent cooperative transmission are considered. For one-dimensional (1-D) extended networks, in contrast to noncooperative networks, for any path loss exponent less than or equal to one, full connectivity occurs under the noncoherent cooperation model with probability one for any node density. Conversely, there is no full connectivity with probability one when the path loss exponent exceeds one, and the network does not percolate for any node density if the path loss exponent exceeds two. In two-dimensional (2-D) extended networks with noncoherent cooperation, for any path loss exponent less than or equal to two, full connectivity is achieved for any node density. Conversely, there is no full connectivity when the path loss exponent exceeds two, but the cooperative network percolates for node densities above a threshold which is strictly less than that of the noncooperative network. A less conclusive set of results is presented for the coherent case. Hence, even relatively simple noncoherent cooperation improves the connectivity of large ad hoc networks.",
"",
"We study connectivity and transmission latency in wireless networks with unreliable links from a percolation-based perspective. We first examine static models, where each link of the network is functional (active) with some probability, independently of all other links, where the probability may depend on the distance between the two nodes. We obtain analytical upper and lower bounds on the critical density for phase transition in this model. We then examine dynamic models, where each link is active or inactive according to a Markov on- off process. We show that a phase transition also exists in such dynamic networks, and the critical density for this model is the same as the one for static networks under some mild conditions. Furthermore, due to the dynamic behavior of links, a delay is incurred for any transmission even when propagation delay is ignored. We study the behavior of this transmission delay and show that the delay scales linearly with the Euclidean distance between the sender and the receiver when the network is in the subcritical phase, and the delay scales sub-linearly with the distance if the network is in the supercritical phase."
]
}
|
1012.5059
|
2122864366
|
We discuss an algebraic approach to propositional logic with side effects. To this end, we use Hoare’s conditional [1985], which is a ternary connective comparable to if-then-else. Starting from McCarthy’s notion of sequential evaluation [1963] we discuss a number of valuation congruences and we introduce Hoare-McCarthy algebras as the structures that characterize these congruences.
|
Further results from @cite_4 concern binary connectives: we prove that the conditional connective cannot be expressed modulo @math (or any finer congruence) if only binary connectives are allowed, but that it can be expressed modulo @math (and @math ); for @math we leave this question open. In the papers @cite_4 @cite_1 we use the notation @math (taken from @cite_6 ) for left-sequential conjunction, defined by \[ x \barwedge y = y \triangleleft x \triangleright \mathsf{F}, \] and elaborate on the connection between sequential binary connectives, the conditional, and negation, defined by \[ \neg x = \mathsf{F} \triangleleft x \triangleright \mathsf{T}. \] In @cite_1 we define various short-circuit logics: the fragments of proposition algebra that remain if only @math and @math can be used. These logics (various choices can be made) are put forward for modeling conditions as used in programming. Typical laws that are valid with respect to each valuation congruence are the associativity of @math , the double negation shift, and @math (and, as explained in the Introduction, a typical non-validity is @math ).
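To make the sequential reading concrete, the small interpreter below (our own illustration, reading the conditional y ◁ x ▷ z as 'if x then y else z') implements the conditional, left-sequential conjunction and negation over atoms whose evaluation is logged as a side effect; it shows the short-circuit behaviour and why swapping the arguments of the conjunction changes the evaluation trace.

def conditional(y, x, z, state):
    # Hoare's conditional y <| x |> z: evaluate x first, then y or z.
    return y(state) if x(state) else z(state)

def const(b):
    return lambda state: b

def atom(name):
    # An atom whose evaluation is recorded in state["trace"] (the side effect).
    def evaluate(state):
        state["trace"].append(name)
        return state["valuation"][name]
    return evaluate

def left_and(x, y):
    # Left-sequential conjunction: x and-then y  =  y <| x |> F.
    return lambda state: conditional(y, x, const(False), state)

def neg(x):
    # Negation: not x  =  F <| x |> T.
    return lambda state: conditional(const(False), x, const(True), state)

a, b = atom("a"), atom("b")
s1 = {"valuation": {"a": False, "b": True}, "trace": []}
print(left_and(a, b)(s1), s1["trace"])   # False ['a']       (b is never evaluated)
s2 = {"valuation": {"a": False, "b": True}, "trace": []}
print(left_and(b, a)(s2), s2["trace"])   # False ['b', 'a']  (different trace)
s3 = {"valuation": {"a": True}, "trace": []}
print(neg(a)(s3), s3["trace"])           # False ['a']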
|
{
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_6"
],
"mid": [
"",
"2117506990",
"2109246080"
],
"abstract": [
"",
"We propose a combination of Kleene’s three-valued logic and ACP process algebra via the guarded commandconstruct. We present an operational semantics in SOS-style, and a completeness result. © 1998 Elsevier Science B.V. All rights reserved.",
"ABSTRACT In this paper, we survey 3-valued logics and their complete axiomatizations, one of which may be new. We then propose a 4-valued, functionally complete logic that incorporates these 3-valued systems and provide notations for interesting operators and subsystems."
]
}
|
1012.5396
|
2155085077
|
It is popular nowadays to bring techniques from bibliometrics and scientometrics into the world of digital libraries to explore mechanisms which underlie community development. In this paper we use the DBLP data to investigate the author's scientific career, and analyze some of the computer science communities. We compare them in terms of productivity and population stability, and use these features to compare the sets of top-ranked conferences with their lower ranked counterparts.
|
Besides the analysis of network properties, there is interest in research related to topic development and distribution in the scientific community. Zaïane, Chen and Goebel @cite_1 used the collaboration network embedded in DBLP to discover topical connections between the network members and eventually use them in a recommendation system. Another investigation connecting topics and the co-author community has been reported in @cite_3 . That work used CiteSeer as a testbed and aimed at gaining insight into topic evolution and the connections between researchers and topics.
|
{
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2139836297",
"1975594954"
],
"abstract": [
"Extracting information from very large collections of structured, semi-structured or even unstructured data can be a considerable challenge when much of the hidden information is implicit within relationships among entities in the data. Social networks are such data collections in which relationships play a vital role in the knowledge these networks can convey. A bibliographic database is an essential tool for the research community, yet finding and making use of relationships comprised within such a social network is difficult. In this paper we introduce DBconnect, a prototype that exploits the social network coded within the DBLP database by drawing on a new random walk approach to reveal interesting knowledge about the research community and even recommend collaborations.",
"We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact."
]
}
|
1012.5396
|
2155085077
|
It is popular nowadays to bring techniques from bibliometrics and scientometrics into the world of digital libraries to explore mechanisms which underlie community development. In this paper we use the DBLP data to investigate the author's scientific career, and analyze some of the computer science communities. We compare them in terms of productivity and population stability, and use these features to compare the sets of top-ranked conferences with their lower ranked counterparts.
|
Yet another branch of investigation aims at the evaluation of scientific venues. The first attempts relied heavily on citation networks @cite_16 @cite_4 . However, as citations are not always available in bibliographic databases, other approaches have been proposed. In @cite_22 , criteria for the evaluation of program committee members have been developed and successfully applied to rank conferences recorded in CiteSeer. Yan and Lee @cite_14 recently suggested a way of ranking venues based on the scientific contribution of individual scholars. The method has been evaluated on the ACM and DBLP data sets.
|
{
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_4",
"@cite_22"
],
"mid": [
"2145305331",
"2101514744",
"1503953508",
"2099065962"
],
"abstract": [
"Ranking of publication venues is often closely related with important issues such as evaluating the contributions of individual scholars research groups, or subscription decision making. The development of large-scale digital libraries and the availability of various meta data provide the possibility of building new measures more efficiently and accurately. In this work, we propose two novel measures for ranking the impacts of academic venues an easy-to-implement seed-based measure that does not use citation analysis, and a realistic browsing-based measure that takes an article reader's behavior into account. Both measures are computationally efficient yet mimic the results of the widely accepted Impact Factor. In particular, our proposal exploits the fact that: (1)in most disciplines, there are \"top\" venues that most people agree on; and (2) articles that appeared in good venues are more likely to be viewed by readers. Our proposed measures are extensively evaluated on a test case of the Database research community using two real bibliography data sets - ACM and DBLP. Finally, ranks of venues by our proposed measures are compared against the Impact Factor using the Spearman's rank correlation coefficient, and their positive rank order relationship is proved with a statistical significance test.",
"Acknowledgments in research publications, like citations, indicate influential contributions to scientific work. However, acknowledgments are different from citations; whereas citations are formal expressions of debt, acknowledgments are arguably more personal, singular, or private expressions of appreciation and contribution. Furthermore, many sources of research funding expect researchers to acknowledge any support that contributed to the published work. Just as citation indexing proved to be an important tool for evaluating research contributions, we argue that acknowledgments can be considered as a metric parallel to citations in the academic audit process. We have developed automated methods for acknowledgment extraction and analysis and show that combining acknowledgment analysis with citation indexing yields a measurable impact of the efficacy of various individuals as well as government, corporate, and university sponsors of scientific work.",
"We propose a popularity weighted ranking algorithm for academic digital libraries that uses the popularity factor of a publication venue overcoming the limitations of impact factors. We compare our method with the naive PageRank, citation counts and HITS algorithm, three popular measures currently used to rank papers beyond lexical similarity. The ranking results are evaluated by discounted cumulative gain(DCG) method using four human evaluators. We show that our proposed ranking algorithm improves the DCG performance by 8.5 on average compared to naive PageRank, 16.3 compared to citation count and 23.2 compared to HITS. The algorithm is also evaluated by click through data from CiteSeer usage log.",
"Bibliometrics are important measures for venue quality in digital libraries. Impacts of venues are usually the major consideration for subscription decision-making, and for ranking and recommending high-quality venues and documents. For digital libraries in the Computer Science literature domain, conferences play a major role as an important publication and dissemination outlet. However, with a recent profusion of conferences and rapidly expanding fields, it is increasingly challenging for researchers and librarians to assess the quality of conferences. We propose a set of novel heuristics to automatically discover prestigious (and low-quality) conferences by mining the characteristics of Program Committee members. We examine the proposed cues both in isolation and combination under a classification scheme. Evaluation on a collection of 2,979 conferences and 16,147 PC members shows that our heuristics, when combined, correctly classify about 92 of the conferences, with a low false positive rate of 0.035 and a recall of more than 73 for identifying reputable conferences. Furthermore, we demonstrate empirically that our heuristics can also effectively detect a set of low-quality conferences, with a false positive rate of merely 0.002. We also report our experience of detecting two previously unknown low-quality conferences. Finally, we apply the proposed techniques to the entire quality spectrum by ranking conferences in the collection."
]
}
|
1012.4691
|
2953056599
|
This paper introduces a three-phase heuristic approach for a large-scale energy management and maintenance scheduling problem. The problem is concerned with scheduling maintenance and refueling for nuclear power plants up to five years into the future, while handling a number of scenarios for future demand and prices. The goal is to minimize the expected total production costs. The first phase of the heuristic solves a simplified constraint programming model of the problem, the second performs a local search, and the third handles overproduction in a greedy fashion. This work was initiated in the context of the ROADEF EURO Challenge 2010, a competition organized jointly by the French Operational Research and Decision Support Society, the European Operational Research Society, and the European utility company Electricite de France. In the concluding phase of the competition our team ranked second in the junior category and sixth overall. After correcting an implementation bug in the program that was submitted for evaluation, our heuristic solves all ten real-life instances, and the solutions obtained are all within 2.45 of the currently best known solutions. The results given here would have ranked first in the original competition.
|
A problem similar to the one studied here was considered in @cite_6 in 1997. The authors consider roughly the same scheduling problem as is handled here and formulate a mixed integer programming model. In their model there is no decision variable for the refueling amounts; this decision is instead handled as a predefined fixed amount. There is also no uncertainty in future demand or prices, and the demand is given per week, in contrast to the competition, where the electricity demand is given per time step, i.e., their discretization is more coarse-grained.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1984494184"
],
"abstract": [
"Abstract The problem addressed here is scheduling the shutdown for refueling and maintenance of nuclear power plants. The models have up to four reactors requiring of the order of five shutdowns each over a five-year time horizon. The resulting mixed-integer program is large and complex with interesting structure. We show good results using a mixed-integer optimizer taking advantage of a strong linear programming formulation."
]
}
|
1012.4691
|
2953056599
|
This paper introduces a three-phase heuristic approach for a large-scale energy management and maintenance scheduling problem. The problem is concerned with scheduling maintenance and refueling for nuclear power plants up to five years into the future, while handling a number of scenarios for future demand and prices. The goal is to minimize the expected total production costs. The first phase of the heuristic solves a simplified constraint programming model of the problem, the second performs a local search, and the third handles overproduction in a greedy fashion. This work was initiated in the context of the ROADEF EURO Challenge 2010, a competition organized jointly by the French Operational Research and Decision Support Society, the European Operational Research Society, and the European utility company Electricite de France. In the concluding phase of the competition our team ranked second in the junior category and sixth overall. After correcting an implementation bug in the program that was submitted for evaluation, our heuristic solves all ten real-life instances, and the solutions obtained are all within 2.45 of the currently best known solutions. The results given here would have ranked first in the original competition.
|
Besides the work mentioned above, very little has been published on the topic. Nuclear maintenance and refueling scheduling is also considered in @cite_8 , but there the problem considered is to minimize the environmental impact.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2080382509"
],
"abstract": [
"The New York Power Authority (NYPA) wanted to develop a 10-year schedule for refueling its Indian Point 3 Nuclear Power Plant (IP3) that balanced fish protection, which occurs when IP3 is shut down for refueling, and the costs of buying and loading fuel. We developed a decision analysis model to compare alternative strategies for refueling. It explicitly considered key uncertainties associated with future operation: how well IP3 operates, how long it takes to refuel, and when New York State is likely to deregulate the electric utility industry. The NYPA decision makers used the model to reinforce their choice of a refueling strategy. They were not surprised that more fish protection occurred with strategies that restricted the starting date for refueling to the third week in May, rather than allowing the starting date to float throughout the period from May through August. However, the decision makers were surprised that the more restrictive strategies also resulted in lower costs."
]
}
|
1012.4691
|
2953056599
|
This paper introduces a three-phase heuristic approach for a large-scale energy management and maintenance scheduling problem. The problem is concerned with scheduling maintenance and refueling for nuclear power plants up to five years into the future, while handling a number of scenarios for future demand and prices. The goal is to minimize the expected total production costs. The first phase of the heuristic solves a simplified constraint programming model of the problem, the second performs a local search, and the third handles overproduction in a greedy fashion. This work was initiated in the context of the ROADEF EURO Challenge 2010, a competition organized jointly by the French Operational Research and Decision Support Society, the European Operational Research Society, and the European utility company Electricite de France. In the concluding phase of the competition our team ranked second in the junior category and sixth overall. After correcting an implementation bug in the program that was submitted for evaluation, our heuristic solves all ten real-life instances, and the solutions obtained are all within 2.45 of the currently best known solutions. The results given here would have ranked first in the original competition.
|
Setting production levels for power plants is treated in the literature under the term 'economic dispatch', i.e., the problem of dispatching units to produce power in an economic way that minimizes production costs. While many different settings have been considered, see for example the survey @cite_5 by Chowdhury and Rahman, the production planning for nuclear power plants has new features. These concern special bounds on the production levels when the fuel level is low, which lead to nonlinear constraints: when a type 2 plant's fuel level drops below a given threshold, a decreasing power production level is imposed. Without this constraint the production planning could be solved with linear programming.
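As a concrete illustration of the linear core, the toy single-time-step dispatch below (our own example with made-up numbers, using scipy's linprog; it is not the challenge model) minimizes production cost subject to a demand balance and capacity bounds. The type 2 rule described above, which couples the admissible production levels to the fuel state once a threshold is crossed, is precisely what cannot be written in this linear form.

from scipy.optimize import linprog

cost = [50.0, 20.0, 35.0]        # EUR per MWh for three plants (made-up numbers)
p_max = [300.0, 150.0, 200.0]    # capacity of each plant in MW
demand = 400.0                   # demand of the single time step in MW

# minimize cost . p   subject to   sum(p) == demand,  0 <= p_i <= p_max_i
res = linprog(
    c=cost,
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[demand],
    bounds=list(zip([0.0] * 3, p_max)),
    method="highs",
)
print(res.x)     # cheapest plants are loaded first, e.g. [ 50. 150. 200.]
print(res.fun)   # total production cost for the time step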
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2111459478"
],
"abstract": [
"A survey is presented of papers and reports that address various aspects of economic dispatch. The time period considered is 1977-88. Four related areas of economic dispatch are identified and papers published in the general areas of economic dispatch are classified into these. These areas are: optimal power flow, economic dispatch in relation to AGC, dynamic dispatch, and economic dispatch with nonconventional generation sources. >"
]
}
|
1012.4815
|
1640366235
|
We consider a single station (STA) in the Power Save Mode (PSM) of an IEEE 802.11 infrastructure WLAN. This STA is assumed to be carrying uplink and downlink traffic via the access point (AP). We assume that the transmission queues of the AP and the STA are saturated, i.e., the AP and the STA always have at least one packet to send. For this scenario, it is observed that uplink and downlink throughputs achieved are different. The reason behind the difference is the long term attempt rates of the STA and the AP due to the PSM protocol. In this paper we first obtain the long term attempt rates of the STA and the AP and, using these, we obtain the saturation throughputs of the AP and the STA. We provide a validation of analytical results using the NS-2 simulator.
|
In a seminal paper, Bianchi @cite_1 proposed an approximate model for the throughput performance of a single-cell IEEE 802.11 network that uses DCF as the medium access mechanism and in which all the nodes are saturated. The authors of @cite_2 extended the model and provided some new insights. In both of these papers, the attempt probability @math (also, the long term attempt rate) in a system slot is evaluated as a function of the number of contending nodes. In this paper, in contrast, we obtain the attempt probability and the saturation throughput for a scenario that requires a different analysis than the ones presented in @cite_1 and @cite_2 , due to the different behavior of the PSM. In our earlier submission @cite_9 , we focused on the performance of PSM under application-level downlink traffic over TCP. In this paper we consider both uplink and downlink traffic, which is different from the TCP setting, and we analyze the attempt probabilities of the AP and the STA and the saturation throughputs, which makes this paper more basic than our earlier submission @cite_9 .
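For reference, the sketch below solves the standard saturated fixed point used in these Bianchi-style analyses (the generic DCF equations, not the PSM-specific analysis developed in this paper); here W is the minimum contention window, m the maximum backoff stage and n the number of saturated contending nodes, with illustrative default values.

def tau_of_p(p, W=32, m=5):
    # Bianchi's per-slot attempt probability of a saturated node with minimum
    # window W and m backoff stages, given conditional collision probability p.
    if abs(1.0 - 2.0 * p) < 1e-12:            # removable singularity at p = 1/2
        return 4.0 / (2.0 * (W + 1) + W * m)
    return (2.0 * (1.0 - 2.0 * p)
            / ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))

def solve_fixed_point(n, W=32, m=5):
    # Bisection on p: at the fixed point a tagged node's collision probability
    # equals 1 - (1 - tau)^(n - 1) when all other nodes attempt with rate tau.
    lo, hi = 0.0, 0.999999
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid < 1.0 - (1.0 - tau_of_p(mid, W, m)) ** (n - 1):
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return tau_of_p(p, W, m), p

for n in (2, 5, 10, 20):
    tau, p = solve_fixed_point(n)
    print(n, round(tau, 4), round(p, 4))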
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_2"
],
"mid": [
"",
"2162598825",
"2166657206"
],
"abstract": [
"",
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow will be bounded by the one with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases."
]
}
|
1012.4815
|
1640366235
|
We consider a single station (STA) in the Power Save Mode (PSM) of an IEEE 802.11 infrastructure WLAN. This STA is assumed to be carrying uplink and downlink traffic via the access point (AP). We assume that the transmission queues of the AP and the STA are saturated, i.e., the AP and the STA always have at least one packet to send. For this scenario, it is observed that uplink and downlink throughputs achieved are different. The reason behind the difference is the long term attempt rates of the STA and the AP due to the PSM protocol. In this paper we first obtain the long term attempt rates of the STA and the AP and, using these, we obtain the saturation throughputs of the AP and the STA. We provide a validation of analytical results using the NS-2 simulator.
|
The authors of @cite_5 , Lei and Nilsson @cite_12 , Baek and Choi @cite_7 and the authors of @cite_11 evaluated the energy performance of PSM, but none of them attempts to obtain the saturation attempt rates of the STA and the AP. Apart from this, all of the above papers consider a PSM protocol implementation that is not practical. They assume the following sequence of frame exchanges: first the PSM STA sends the PS-POLL frame through contention; after a SIFS the AP sends the data packet; and after another SIFS the STA sends the MAC ACK. Thus the AP does not contend to send data. In the presence of traffic from the AP to other STAs, however, some packets might already be present in the NIC queue of the AP when the PS-POLL frame is received, and these packets need to be sent first, so the above sequence of frame exchanges cannot work in that scenario. We consider a different implementation of the PSM protocol, which is explained in the next section.
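As a rough illustration of why the choice of frame-exchange sequence matters, the arithmetic below (all durations are made-up placeholder values, not taken from the paper or from the standard) compares the idealized exchange assumed in the cited works with one plausible alternative in which the AP first acknowledges the PS-POLL and must then contend on its own before delivering the buffered data.

# Illustrative timing constants (assumed values, in seconds).
SIFS, DIFS, SLOT = 10e-6, 50e-6, 20e-6
T_PSPOLL, T_DATA, T_ACK = 0.3e-3, 1.2e-3, 0.2e-3

def exchange_without_ap_contention():
    # Sequence assumed in the cited papers:
    # PS-POLL (won through contention) -> SIFS -> DATA -> SIFS -> ACK.
    return T_PSPOLL + SIFS + T_DATA + SIFS + T_ACK

def exchange_with_ap_contention(avg_backoff_slots=8):
    # One plausible alternative: PS-POLL -> SIFS -> ACK, after which the AP
    # contends (DIFS plus an average backoff) before sending DATA,
    # which the STA acknowledges after a SIFS.
    return (T_PSPOLL + SIFS + T_ACK
            + DIFS + avg_backoff_slots * SLOT
            + T_DATA + SIFS + T_ACK)

print(exchange_without_ap_contention())   # about 1.72e-03 s with these values
print(exchange_with_ap_contention())      # about 2.13e-03 s with these values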
|
{
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_12",
"@cite_11"
],
"mid": [
"2792764963",
"1972531949",
"2124035473",
"2086672759"
],
"abstract": [
"",
"For the battery-powered wireless stations, power saving is one of the significant issues in IEEE 802.11 wireless local area network (WLAN). Recently, Lei and Nilsson investigated the power save mode in IEEE 802.11 infrastructure mode by an M G 1 queue with bulk service. They obtained the average packet delay and lower and upper bounds of the average Percentage of Time a station stays in the Doze state (PTD). In this paper, the further investigation of power save mode is done; simple derivation of the average and the variance of packet delay and the exact value of PTD are obtained. Numerical results show that our analytic results for PTD match quite well with simulation results. Using our performance analysis, we can find the maximal listen interval which minimizes the power consumption of a station while satisfying the required quality of service (QoS) on the average and the variance of packet delay.",
"Energy efficiency is an important issue in wireless networks. We investigate the power management scheme in the IEEE 802.11 based infrastructure WLANs (wireless local area network) in order to find the optimal parameters that can achieve good energy efficiency without degrading other performances. With power management, an AP (Access Point) might temporarily buffer packets whose destination stations are in the Doze state. We focus our study on the behavior of the buffered packets and their impacts on the performance metrics. We model the power management scheme as an M G 1 queue with bulk service and obtain the analytical results for the energy efficiency and the response time performance metrics, which are controlled by the listen interval. Our simulation results are in good agreement with the analysis. Based on the analytical model and simulation, we propose to select the largest listen interval with the satisfaction of the response time requirement.",
"In this paper, we focus on the Markov model of IEEE 802.11 distributed coordination function with power saving mode. The throughput evaluation is based on the model, with the comparison with the simulation result for validation. Furthermore, we notice that after each beacon transmission, there're more stations in contention than usual. Excessive contention results in high collision probability and low throughput. To solve this problem, we propose a downlink access scheme which can be used in both the basic access scheme and the access scheme with RTS CTS. This novel scheme enables AP to constrain the number of stations in contention to be an optimal value. Simulation results prove the performance improvement declared. The proposed scheme can be extended for multi-rate or multi-service WLANs."
]
}
|
1012.4815
|
1640366235
|
We consider a single station (STA) in the Power Save Mode (PSM) of an IEEE 802.11 infrastructure WLAN. This STA is assumed to be carrying uplink and downlink traffic via the access point (AP). We assume that the transmission queues of the AP and the STA are saturated, i.e., the AP and the STA always have at least one packet to send. For this scenario, it is observed that the uplink and downlink throughputs achieved are different. The reason behind the difference lies in the long term attempt rates of the STA and the AP under the PSM protocol. In this paper we first obtain the long term attempt rates of the STA and the AP and, using these, we obtain the saturation throughputs of the AP and the STA. We provide a validation of the analytical results using the NS-2 simulator.
|
Krashinsky and Balakrishnan @cite_10 and Qiao and Shin @cite_4 focus on the interaction of TCP slow start, RTT and PSM. @cite_0 propose a way to minimize energy and delay by scheduling the delivery of pending data and communicating the schedule to STAs through beacon frames. @cite_3 propose to take advantage of the throttling done by the TCP server in media streaming applications. In all of these papers, the authors focus on energy saving either by modifying the PSM protocol @cite_0 , or by modifying the sleep/wake schedule of the radio depending upon the characteristics of the application-level traffic @cite_10 , @cite_4 and @cite_3 .
|
{
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_10",
"@cite_4"
],
"mid": [
"2007073292",
"2045258279",
"2040899944",
"2114775950"
],
"abstract": [
"Power conservation is a general concern for mobile computing and communication. In this paper, we investigate the performance of the current 802.11 power saving mechanism (unscheduled PSM) and demonstrate that background network traffic can have a significant impact on the power consumption of mobile stations. To improve power efficiency, a scheduled PSM protocol based on time slicing is proposed in this paper. The protocol adopts the mechanism of time division, schedules the access point to deliver pending data at designated time slices, and adaptively adjusts the power state of the mobile stations. The proposed scheme is near theoretical optimal for power saving in the sense that it greatly reduces the effect of background traffic, minimizes the station idle time, and maximizes its energy utilization. Comprehensive analysis and simulations are conducted to evaluate the new protocol. Our results show that it provides significant energy saving over the unscheduled PSM, particularly in circumstances where multiple traffic streams coexist in a network. Moreover, it achieves the saving at the cost of only a slight degradation of the one-way delay performance.",
"While the 802.11 power saving mode (PSM) and its enhancements can reduce power consumption by putting the wireless network interface (WNI) into sleep as much as possible, they either require additional infrastructure support, or may degrade the transmission throughput and cause additional transmission delay. These schemes are not suitable for long and bulk data transmissions with strict QoS requirements on wireless devices. With increasingly abundant bandwidth available on the Internet, we have observed that TCP congestion control is often not a constraint of bulk data transmissions as bandwidth throttling is widely used in practice. In this paper, instead of further manipulating the trade-off between the power saving and the incurred delay, we effectively explore the power saving potential by considering the bandwidth throttling on streaming downloading servers. We propose an application-independent protocol, called PSM-throttling. With a quick detection on the TCP flow throughput, a client can identify bandwidth throttling connections with a low cost Since the throttling enables us to reshape the TCP traffic into periodic bursts with the same average throughput as the server transmission rate, the client can accurately predict the arriving time of packets and turn on off the WNI accordingly. PSM-throttling can minimize power consumption on TCP-based bulk traffic by effectively utilizing available Internet bandwidth without degrading the application's performance perceived by the user. Furthermore, PSM-throttling is client-centric, and does not need any additional infrastructure support. Our lab-environment and Internet-based evaluation results show that PSM-throttling can effectively improve energy savings (by up to 75 ) and or the QoS for a broad types of TCP-based applications, including streaming, pseudo streaming, and large file downloading, over existing PSM-like methods.",
"On many battery-powered mobile computing devices, the wireless network is a significant contributor to the total energy consumption. In this paper, we investigate the interaction between energy-saving protocols and TCP performance for Web-like transfers. We show that the popular IEEE 802.11 power-saving mode (PSM), a \"static\" protocol, can harm performance by increasing fast round trip times (RTTs) to 100 ms; and that under typical Web browsing workloads, current implementations will unnecessarily spend energy waking up during long idle periods.To overcome these problems, we present the Bounded-Slowdown (BSD) protocol, a PSM that dynamically adapts to network activity. BSD is an optimal solution to the problem of minimizing energy consumption while guaranteeing that a connection's RTT does not increase by more than a factor p over its base RTT, where p is a protocol parameter that exposes the trade-off between minimizing energy and reducing latency. We present several trace-driven simulation results that show that, compared to a static PSM, the Bounded-Slowdown protocol reduces average Web page retrieval times by 5-64 , while simultaneously reducing energy consumption by 1-14 (and by 13× compared to no power management).",
"Static PSM (power-saving mode) schemes employed in the current IEEE 802.11 implementations could not provide any delay-performance guarantee because of their fixed wakeup intervals. In this paper, we propose a smart PSM (SPSM) scheme, which directs a wireless station to sleep wake up according to an \"optimal\" sequence, such that the desired delay performance is guaranteed with minimum energy consumption. Instead of constructing the sequence directly, SPSM takes a unique two-step approach. First, it translates an arbitrary user-desired delay performance into a generic penalty function. Second, it provides a generic algorithm that takes the penalty function as the input and produces the optimal station action sequence automatically. This way, the potentially-complicated energy-consumption-minimization problem subject to delay-performance constraints is simplified and solved systematically. Our simulation results show that, with a two-stair penalty function, SPSM achieves delay performance similar to the BSD (bounded slowdown) protocol under various scenarios, but always with less energy consumption, thanks to its capability to adapt to changes in the response-time distribution. Moreover, because of SPSM's two-step design feature, it is more flexible than BSD in the sense of being able to meet arbitrary user-desired delay requirement, e.g., providing soft delay-bound guarantees with power penalty functions."
]
}
|
1012.3889
|
2170185703
|
Due to the lack of coordination, it is unlikely that the selfish players of a strategic game reach a socially good state. A possible way to cope with selfishness is to compute a desired outcome (if it is tractable) and impose it. However this answer is often inappropriate because compelling an agent can be costly, unpopular or just hard to implement. Since both situations (no coordination and full coordination) show opposite advantages and drawbacks, it is natural to study possible tradeoffs. In this paper we study a strategic game where the nodes of a simple graph G are independent agents who try to form pairs: e.g. jobs and applicants, tennis players for a match, etc. In many instances of the game, a Nash equilibrium significantly deviates from a social optimum. We analyze a scenario where we fix the strategy of some players; the other players are free to make their choice. The goal is to compel a minimum number of players and guarantee that any possible equilibrium of the modified game is a social optimum, i.e. created pairs must form a maximum matching of G. We mainly show that this intriguing problem is NP-hard and propose an approximation algorithm with a constant ratio.
|
The mfv problem is related to the well-known stable marriage problem (smp) @cite_11 . In the smp there are @math women and @math men who rank the persons of the opposite sex in a strict order of preference. A solution is a matching of size @math ; it is unstable if two participants prefer being together to being with their respective partners. Interestingly, a stable matching always exists and one can compute it with the algorithm of Gale and Shapley @cite_11 . Many variants of the smp have been studied in the literature: all participants are of the same gender (the stable roommates problem) @cite_1 , ties in preferences are allowed @cite_6 , players can give an incomplete list @cite_6 , etc. In fact the mfv problem has some similarities with the stable roommates problem with simplified preferences: every participant gives a list of equivalent, interchangeable partners, omitting only those persons he would never accept under any circumstances.
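As a concrete illustration of the Gale-Shapley procedure mentioned above, here is a minimal Python sketch of the classical proposer-optimal variant; the preference lists in the example are hypothetical toy data.

```python
# Minimal sketch of the Gale-Shapley (men-proposing) algorithm for the
# stable marriage problem. The preference lists below are toy data.

def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as a dict woman -> man."""
    # rank[w][m] = position of man m in woman w's preference list (lower = better)
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}     # index of the next woman to propose to
    engaged_to = {}                               # woman -> man
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                     # w was free: she accepts
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])        # w trades up: her old partner is free again
            engaged_to[w] = m
        else:
            free_men.append(m)                    # w rejects m; he stays free

    return engaged_to

if __name__ == "__main__":
    men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
    women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
    print(gale_shapley(men, women))   # e.g. {'w1': 'm2', 'w2': 'm1'}
```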
|
{
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"2068574579",
"1480100487",
"2068115726"
],
"abstract": [
"The stable marriage problem is that of matching n men and n women, each of whom has ranked the members of the opposite sex in order of preference, so that no unmatched couple both prefer each other to their partners under the matching. At least one stable matching exists for every stable marriage instance, and efficient algorithms for finding such a matching are well known. The stable roommates problem involves a single set of even cardinality n, each member of which ranks all the others in order of preference. A stable matching is now a partition of this single set into n2 pairs so that no two unmatched members both prefer each other to their partners under the matching. In this case, there are problem instances for which no stable matching exists. However, the present paper describes an O(n2) algorithm that will determine, for any instance of the problem, whether a stable matching exists, and if so, will find such a matching.",
"The original stable marriage problem requires all men and women to submit a complete and strictly ordered preference list. This is obviously often unrealistic in practice, and several relaxations have been proposed, including the following two common ones: one is to allow an incomplete list, i.e., a man is permitted to accept only a subset of the women and vice versa. The other is to allow a preference list including ties. Fortunately, it is known that both relaxed problems can still be solved in polynomial time. In this paper, we show that the situation changes substantially if we allow both relaxations (incomplete lists and ties) at the same time: the problem not only becomes NP-hard, but also the optimal cost version has no approximation algorithm achieving the approximation ratio of N1-Ɛ, where N is the instance size, unless P=NP.",
""
]
}
|
1012.3189
|
2048772520
|
We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of risk-averse or risk-prone behaviors, and instead we consider a more general objective which is to maximize the expected utility of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with additive error @math for any @math , if there is a pseudopolynomial time algorithm for the exact version of the problem (this is true for the problems mentioned above) and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.
|
This work is partially inspired by our prior work on top- @math and other queries over probabilistic datasets @cite_3 @cite_43 . In fact, we can show that both the consensus answers proposed in @cite_3 and the parameterized ranking functions proposed in @cite_43 follow the expected utility maximization principle, where the utility functions are materialized as distance metrics for the former and as weight functions for the latter. Our technique for approximating the utility functions is also similar in spirit to the approximation scheme used in @cite_43 . However, no performance guarantees are provided in that work.
|
{
"cite_N": [
"@cite_43",
"@cite_3"
],
"mid": [
"2120342618",
"2128230033"
],
"abstract": [
"The dramatic growth in the number of application domains that naturally generate probabilistic, uncertain data has resulted in a need for efficiently supporting complex querying and decision-making over such data. In this paper, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criteria optimization problem, and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRFω and PRFe, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating functions-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRFe, at approximating other ranking functions and the scalability of our proposed algorithms for exact or approximate ranking.",
"We address the problem of finding a \"best\" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, Top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called and xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest."
]
}
|
1012.3189
|
2048772520
|
We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of risk-averse or risk-prone behaviors, and instead we consider a more general objective which is to maximize the expected utility of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with additive error @math for any @math , if there is a pseudopolynomial time algorithm for the exact version of the problem (this is true for the problems mentioned above) and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.
|
There is a large volume of work on approximating functions using short exponential sums over a bounded domain, e.g., @cite_26 @cite_11 @cite_29 @cite_30 . Some works also consider using linear combinations of Gaussians or other kernels to approximate functions with finite support over the entire real axis @math @cite_35 . This is, however, impossible using exponentials since @math is either periodic (if @math ) or tends to infinity when @math or @math (if @math ).
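To illustrate the kind of approximation discussed above, the sketch below fits a short exponential sum c1*exp(-a1*x) + c2*exp(-a2*x) to a target function on a bounded interval by generic nonlinear least squares. The target function 1/(1+x), the two-term model, and the initial guess are assumptions made only for this illustration; the cited works use Prony-type and quadrature-based constructions rather than this naive fit.

```python
# Sketch: approximate a function on a bounded domain by a short exponential sum
#   f(x) ~ c1*exp(-a1*x) + c2*exp(-a2*x)
# The target (1/(1+x)), the two-term model, and the initial guess are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def exp_sum(x, c1, a1, c2, a2):
    return c1 * np.exp(-a1 * x) + c2 * np.exp(-a2 * x)

x = np.linspace(0.0, 5.0, 200)          # bounded domain [0, 5]
target = 1.0 / (1.0 + x)                # function to approximate

params, _ = curve_fit(exp_sum, x, target, p0=[0.5, 0.5, 0.5, 2.0], maxfev=10000)
max_err = np.max(np.abs(exp_sum(x, *params) - target))
print("fitted parameters:", params)
print("max absolute error on [0, 5]:", max_err)
```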
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_26",
"@cite_29",
"@cite_11"
],
"mid": [
"2078691815",
"2019247511",
"2081182772",
"2078132413",
"2008979617"
],
"abstract": [
"Abstract We revisit the efficient approximation of functions by sums of exponentials or Gaussians in Beylkin and Monzon (2005) [16] to discuss several new results and applications of these approximations. By using the Poisson summation to discretize integral representations of e.g., power functions r − β , β > 0 , we obtain approximations with uniform relative error on the whole real line. Our approach is applicable to a class of functions and, in particular, yields a separated representation for the function e − x y . As a result, we obtain sharper error estimates and a simpler method to derive trapezoidal-type quadratures valid on finite intervals. We also introduce a new reduction algorithm for the case where our representation has an excessive number of terms with small exponents. As an application of these new estimates, we simplify and improve previous results on separated representations of operators with radial kernels. For any finite but arbitrary accuracy, we obtain new separated representations of solutions of Laplace's equation satisfying boundary conditions on the half-space or the sphere. These representations inherit a multiresolution structure from the Gaussian approximation leading to fast algorithms for the evaluation of the solutions. In the case of the sphere, our approach provides a foundation for a new multiresolution approach to evaluating and estimating models of gravitational potentials used for satellite orbit computations.",
"This textbook is designed for graduate students in mathematics, physics, engineering, and computer science. Its purpose is to guide the reader in exploring contemporary approximation theory. The emphasis is on multi-variable approximation theory, i.e., the approximation of functions in several variables, as opposed to the classical theory of functions in one variable. Most of the topics in the book, heretofore accessible only through research papers, are treated here from the basics to the currently active research, often motivated by practical problems arising in diverse applications such as science, engineering, geophysics, and business and economics. Among these topics are projections, interpolation paradigms, positive definite functions, interpolation theorems of Schoenberg and Micchelli, tomography, artificial neural networks, wavelets, thin-plate splines, box splines, ridge functions, and convolutions. An important and valuable feature of the book is the bibliography of almost 600 items directing the reader to important books and research papers. There are 438 problems and exercises scattered through the book allowing the student reader to get a better understanding of the subject.",
"A modification of the classical technique of Prony for fitting sums of exponential functions to data is considered. The method maximizes the likelihood for the problem (unlike the usual implementation of Prony’s method, which is not even consistent for transient signals), proves to be remarkably effective in practice, and is supported by an asymptotic stability result. Novel features include a discussion of the problem parametrization and its implications for consistency. The asymptotic convergence proofs are made possible by an expression for the algorithm in terms of circulant divided difference operators.",
"Abstract We introduce a new approach, and associated algorithms, for the efficient approximation of functions and sequences by short linear combinations of exponential functions with complex-valued exponents and coefficients. These approximations are obtained for a finite but arbitrary accuracy and typically have significantly fewer terms than Fourier representations. We present several examples of these approximations and discuss applications to fast algorithms. In particular, we show how to obtain a short separated representation (sum of products of one-dimensional functions) of certain multi-dimensional Green's functions.",
"Abstract We introduce new families of Gaussian-type quadratures for weighted integrals of exponential functions and consider their applications to integration and interpolation of bandlimited functions. We use a generalization of a representation theorem due to Caratheodory to derive these quadratures. For each positive measure, the quadratures are parameterized by eigenvalues of the Toeplitz matrix constructed from the trigonometric moments of the measure. For a given accuracy ϵ, selecting an eigenvalue close to ϵ yields an approximate quadrature with that accuracy. To compute its weights and nodes, we present a new fast algorithm. These new quadratures can be used to approximate and integrate bandlimited functions, such as prolate spheroidal wave functions, and essentially bandlimited functions, such as Bessel functions. We also develop, for a given precision, an interpolating basis for bandlimited functions on an interval."
]
}
|
1012.3452
|
1507707327
|
Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model, which considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interprocess communications. Our simple proof-of-concept implementation and experiments show the effectiveness of this approach in real-world applications. Our implementation does not require any change in the software applications or any special kind of configuration in the system. Moreover, it does not require any additional information about the CPU needs of applications or other resource requirements. Our experiments show significant performance improvement for real-world applications. For example, we observe an almost constant average response time for the MySQL database server and a constant frame rate for mplayer under different simulated load values.
|
Windows @cite_7 also uses the "window system input focus" as a measure of user interaction and increases the priority of the process that has the input focus. Using input focus may help to improve interactivity performance but has several problems. If a user is running multiple interactive programs, for example an audio player and a web browser, then while the user is browsing the web and the input focus is on the web browser, the user still wants the audio player to play the music well. The input focus mechanism also might not be useful if a user interacts with the system through the network.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"110456303"
],
"abstract": [
"See how the core components of the Windows operating system work behind the scenesguided by a team of internationally renowned internals experts. Fully updated for Windows Server 2008 and Windows Vista, this classic guide delivers key architectural insights on system design, debugging, performance, and supportalong with hands-on experiments to experience Windows internal behavior firsthand. Delve inside Windows architecture and internals: Understand how the core system and management mechanisms workfrom the object manager to services to the registry Explore internal system data structures using tools like the kernel debugger Grasp the scheduler's priority and CPU placement algorithms Go inside the Windows security model to see how it authorizes access to data Understand how Windows manages physical and virtual memory Tour the Windows networking stack from top to bottomincluding APIs, protocol drivers, and network adapter drivers Troubleshoot file-system access problems and system boot problems Learn how to analyze crashes"
]
}
|
1012.3452
|
1507707327
|
Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model, which considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interprocess communications. Our simple proof-of-concept implementation and experiments show the effectiveness of this approach in real-world applications. Our implementation does not require any change in the software applications or any special kind of configuration in the system. Moreover, it does not require any additional information about the CPU needs of applications or other resource requirements. Our experiments show significant performance improvement for real-world applications. For example, we observe an almost constant average response time for the MySQL database server and a constant frame rate for mplayer under different simulated load values.
|
@cite_8 use process display output production as a means of detecting interactive and multimedia applications. They schedule processes based on their display output production so that all processes have a chance to produce display output at the same rate. That might be useful for multimedia applications where, for example, all video applications play at the same frame rate regardless of their window size. This approach only addresses desktop applications, since a network user has no display access. Also, it is possible that a compute-intensive job creates a huge amount of display output and receives an increase in its priority even though it is not actually an interactive application.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1968954610"
],
"abstract": [
"Desktop operating systems such as Windows and Linux base scheduling decisions on CPU consumption; processes that consume fewer CPU cycles are prioritized, assuming that interactive processes gain from this since they spend most of their time waiting for user input. However, this doesn't work for modern multimedia applications which require significant CPU resources. We therefore suggest a new metric to identify interactive processes by explicitly measuring interactions with the user, and we use it to design and implement a process scheduler. Measurements using a variety of applications indicate that this scheduler is very effective in distinguishing between competing interactive and noninteractive processes."
]
}
|
1012.3452
|
1507707327
|
Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model, which considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interprocess communications. Our simple proof-of-concept implementation and experiments show the effectiveness of this approach in real-world applications. Our implementation does not require any change in the software applications or any special kind of configuration in the system. Moreover, it does not require any additional information about the CPU needs of applications or other resource requirements. Our experiments show significant performance improvement for real-world applications. For example, we observe an almost constant average response time for the MySQL database server and a constant frame rate for mplayer under different simulated load values.
|
Some researchers and OSs allow real-time or interactive processes to specify their CPU requirements and time constraints. For example, in Mac OS X @cite_2 , a real-time process may ask for a specific CPU reservation. RedLine @cite_17 uses almost the same principles and treats interactive processes like real-time processes. In RedLine, processes can ask for specific CPU and other resource requirements. RedLine also has an admission mechanism that may not allow a process to execute as an interactive process if the system does not have enough of the resources requested by the process.
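The admission mechanism described above can be illustrated with a small sketch: a process is admitted as interactive only if the CPU fractions already reserved plus its request fit within the machine's capacity. The "total utilization <= 1" rule and the data layout are assumptions for illustration, not RedLine's or Mac OS X's actual implementation.

```python
# Illustrative sketch of a CPU admission test in the spirit of the reservation-based
# systems discussed above. The utilization rule and the data layout are assumptions.

class Admission:
    def __init__(self, capacity: float = 1.0):
        self.capacity = capacity          # total CPU share available
        self.reserved = {}                # pid -> admitted CPU fraction

    def request(self, pid: int, cpu_fraction: float) -> bool:
        """Admit the process as interactive only if its reservation still fits."""
        if sum(self.reserved.values()) + cpu_fraction <= self.capacity:
            self.reserved[pid] = cpu_fraction
            return True                   # runs with its reservation
        return False                      # falls back to best-effort scheduling

    def release(self, pid: int) -> None:
        self.reserved.pop(pid, None)

if __name__ == "__main__":
    adm = Admission()
    print(adm.request(101, 0.6))   # True  -> admitted
    print(adm.request(102, 0.5))   # False -> would exceed capacity, rejected
```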
|
{
"cite_N": [
"@cite_17",
"@cite_2"
],
"mid": [
"1555299929",
"1751370186"
],
"abstract": [
"While modern workloads are increasingly interactive and resource-intensive (e.g., graphical user interfaces, browsers, and multimedia players), current operating systems have not kept up. These operating systems, which evolved from core designs that date to the 1970s and 1980s, provide good support for batch and command-line applications, but their ad hoc attempts to handle interactive workloads are poor. Their best-effort, priority-based schedulers provide no bounds on delays, and their resource managers (e.g., memory managers and disk I O schedulers) are mostly oblivious to response time requirements. Pressure on any one of these resources can significantly degrade application responsiveness. We present Redline, a system that brings first-class support for interactive applications to commodity operating systems. Redline works with unaltered applications and standard APIs. It uses lightweight specifications to orchestrate memory and disk I O management so that they serve the needs of interactive applications. Unlike realtime systems that treat specifications as strict requirements and thus pessimistically limit system utilization, Redline dynamically adapts to recent load, maximizing responsiveness and system utilization. We show that Redline delivers responsiveness to interactive applications even in the face of extreme workloads including fork bombs, memory bombs and bursty, large disk I O requests, reducing application pauses by up to two orders of magnitude.",
"Mac OS X was released in March 2001, but many components, such as Mach and BSD, are considerably older. Understanding the design, implementation, and workings of Mac OS X requires examination of several technologies that differ in their age, origins, philosophies, and roles.Mac OS X Internals: A Systems Approach is the first book that dissects the internals of the system, presenting a detailed picture that grows incrementally as you read. For example, you will learn the roles of the firmware, the bootloader, the Mach and BSD kernel components (including the process, virtual memory, IPC, and file system layers), the object-oriented I O Kit driver framework, user libraries, and other core pieces of software. You will learn how these pieces connect and work internally, where they originated, and how they evolved. The book also covers several key areas of the Intel-based Macintosh computers.A solid understanding of system internals is immensely useful in design, development, and debugging for programmers of various skill levels. System programmers can use the book as a reference and to construct a better picture of how the core system works. Application programmers can gain a deeper understanding of how their applications interact with the system. System administrators and power users can use the book to harness the power of the rich environment offered by Mac OS X. Finally, members of the Windows, Linux, BSD, and other Unix communities will find the book valuable in comparing and contrasting Mac OS X with their respective systems.Mac OS X Internals focuses on the technical aspects of OS X and is so full of extremely useful information and programming examples that it will definitely become a mandatory tool for every Mac OS X programmer."
]
}
|
1012.3452
|
1507707327
|
Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model, which considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interprocess communications. Our simple proof-of-concept implementation and experiments show the effectiveness of this approach in real-world applications. Our implementation does not require any change in the software applications or any special kind of configuration in the system. Moreover, it does not require any additional information about the CPU needs of applications or other resource requirements. Our experiments show significant performance improvement for real-world applications. For example, we observe an almost constant average response time for the MySQL database server and a constant frame rate for mplayer under different simulated load values.
|
@cite_11 have an implementation called SWAP which recognizes process dependencies, but it does not distinguish interactive processes or any other type of process that might need increased priority. It only tracks process dependencies based on system calls and prevents a high-priority process from being blocked by a low-priority process that has locked a resource needed by the high-priority process (the priority inversion problem).
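The priority inversion problem mentioned above is classically handled by priority inheritance: when a high-priority process blocks on a resource held by a lower-priority process, the holder temporarily runs at the blocked process's priority. The sketch below shows only this general rule; SWAP itself infers dependencies from system-call history rather than from explicit locks.

```python
# Minimal sketch of priority inheritance, the classic remedy for the priority
# inversion problem described above. This is an illustration of the general idea only.

class Process:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.base_priority = priority
        self.priority = priority          # effective (possibly boosted) priority

class Resource:
    def __init__(self):
        self.holder = None                # Process currently holding the resource

    def acquire(self, p: Process) -> bool:
        if self.holder is None:
            self.holder = p
            return True
        # p blocks; boost the holder so it can finish and release sooner.
        if p.priority > self.holder.priority:
            self.holder.priority = p.priority
        return False

    def release(self) -> None:
        if self.holder is not None:
            self.holder.priority = self.holder.base_priority   # drop the boost
            self.holder = None

if __name__ == "__main__":
    low, high = Process("low", 1), Process("high", 10)
    r = Resource()
    r.acquire(low)                 # low-priority process grabs the resource
    r.acquire(high)                # high-priority process blocks ...
    print(low.priority)            # ... and 'low' now runs at priority 10
    r.release()
    print(low.priority)            # back to its base priority 1
```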
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2159253693"
],
"abstract": [
"We have developed SWAP, a system that automatically detects process dependencies and accounts for such dependencies in scheduling. SWAP uses system call history to determine possible resource dependencies among processes in an automatic and fully transparent fashion. Because some dependencies cannot be precisely determined, SWAP associates confidence levels with dependency information that are dynamically adjusted using feedback from process blocking behavior. SWAP can schedule processes using this imprecise dependency information in a manner that is compatible with existing scheduling mechanisms and ensures that actual scheduling behavior corresponds to the desired scheduling policy in the presence of process dependencies. We have implemented SWAP in Linux and measured its effectiveness on microbenchmarks and real applications. Our results show that SWAP has low overhead, effectively solves the priority inversion problem and can provide substantial improvements in system performance in scheduling processes with dependencies."
]
}
|
1012.3452
|
1507707327
|
Almost all of the current process scheduling algorithms which are used in modern operating systems (OS) have their roots in the classical scheduling paradigms which were developed during the 1970s. But modern computers have different types of software loads and user demands. We think it is important to run what the user wants at the current moment. A user can be a human, sitting in front of a desktop machine, or it can be another machine sending a request to a server through a network connection. We think that the OS should become intelligent enough to distinguish between different processes and allocate resources, including CPU, to those processes which need them most. In this work, as a first step to make the OS aware of the current state of the system, we consider process dependencies and interprocess communications. We are developing a model, which considers the need to satisfy interactive users and other possible remote users or customers, by making scheduling decisions based on process dependencies and interprocess communications. Our simple proof-of-concept implementation and experiments show the effectiveness of this approach in real-world applications. Our implementation does not require any change in the software applications or any special kind of configuration in the system. Moreover, it does not require any additional information about the CPU needs of applications or other resource requirements. Our experiments show significant performance improvement for real-world applications. For example, we observe an almost constant average response time for the MySQL database server and a constant frame rate for mplayer under different simulated load values.
|
The work called RSIO @cite_4 is the most similar to ours. RSIO looks at process I/O patterns as a way of detecting interactive processes. It also tries to identify other processes involved in a user activity and provides a scheduling policy to improve interactive performance. This policy is based on access patterns to I/O devices. RSIO needs a configuration file that defines which I/O devices should be monitored to detect interactive processes. It also has a relatively complicated heuristic mechanism to detect the processes involved in a user interaction.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2151801883"
],
"abstract": [
"We present RSIO, a processor scheduling framework for improving the response time of latency-sensitive applications by monitoring accesses to I O channels and inferring when user interactions occur. RSIO automatically identifies processes involved in a user interaction and boosts their priorities at the time the interaction occurs to improve system response time. RSIO also detects processes indirectly involved in processing an interaction, automatically accounting for dependencies and boosting their priorities accordingly. RSIO works with existing schedulers and requires no application modifications to identify periods of latency-sensitive application activity. We have implemented RSIO in Linux and measured its effectiveness on microbenchmarks and real applications. Our results show that RSIO is easy to use and can provide substantial improvements in system performance for latency-sensitive applications."
]
}
|
1012.3697
|
2171180835
|
The diameter k-clustering problem is the problem of partitioning a finite subset of ℝ^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of k) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Our analysis holds not only for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related k-center and discrete k-center problems. For the corresponding agglomerative algorithms, we deduce an approximation factor of O(log k) as well.
|
In this paper, we study the agglomerative clustering algorithm using the complete linkage strategy to find a hierarchical clustering of @math points from @math . The running time is obviously polynomial in the description length of the input. Therefore, our only goal in this paper is to give an approximation guarantee for the diameter @math -clustering problem. The approximation guarantee is given by a factor @math such that the cost of the @math -clustering computed by the algorithm is at most @math times the cost of an optimal @math -clustering. Although the agglomerative complete linkage clustering algorithm is widely used, there are only a few theoretical results considering the quality of the clustering computed by this algorithm. It is known that there exists a certain metric distance function such that this algorithm computes a @math -clustering with an approximation factor of @math @cite_4 . However, prior to the analysis we present in this paper, no non-trivial upper bound for the approximation guarantee of the classical complete linkage agglomerative clustering algorithm was known, and deriving such a bound has been discussed as one of the open problems in @cite_4 .
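For concreteness, here is a small Python sketch of the agglomerative complete linkage strategy analyzed above, in the formulation that starts from singleton clusters and repeatedly merges the pair of clusters whose union has the smallest diameter until k clusters remain. It is a plain cubic-time illustration, not an optimized implementation, and the sample points are toy data.

```python
# Sketch of agglomerative complete-linkage clustering for the diameter k-clustering
# problem: start with singletons and repeatedly merge the two clusters whose union
# has the smallest diameter, until k clusters remain.
from itertools import combinations
import math

def diameter(cluster, dist):
    """Largest pairwise distance within a cluster (0 for singletons)."""
    return max((dist(p, q) for p, q in combinations(cluster, 2)), default=0.0)

def complete_linkage(points, k, dist):
    clusters = [[p] for p in points]              # start with singleton clusters
    while len(clusters) > k:
        # find the pair whose merged cluster has the smallest diameter
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: diameter(clusters[ij[0]] + clusters[ij[1]], dist))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]                           # j > i, so index i is unaffected
    return clusters

if __name__ == "__main__":
    pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
    result = complete_linkage(pts, 3, math.dist)
    print(result)
    print("max diameter:", max(diameter(c, math.dist) for c in result))
```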
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2769245605"
],
"abstract": [
"We show that for any data set in any metric space, it is possible to construct a hierarchical clustering with the guarantee that for every k, the induced k-clustering has cost at most eight times that of the optimal k-clustering. Here the cost of a clustering is taken to be the maximum radius of its clusters. Our algorithm is similar in simplicity and efficiency to popular agglomerative heuristics for hierarchical clustering, and we show that these heuristics have unbounded approximation factors."
]
}
|
1012.3697
|
2171180835
|
The diameter k-clustering problem is the problem of partitioning a finite subset of ℝ^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of k) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Our analysis holds not only for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related k-center and discrete k-center problems. For the corresponding agglomerative algorithms, we deduce an approximation factor of O(log k) as well.
|
For the Euclidean case, we know that for fixed @math , i.e., when we are not interested in a hierarchy of clusterings, the diameter @math -clustering problem and the @math -center problem are @math -hard. In fact, it is already @math -hard to approximate both problems with approximation factors below @math and @math , respectively @cite_11 .
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2133311553"
],
"abstract": [
"In a clustering problem, the aim is to partition a given set of n points in d -dimensional space into k groups, called clusters, so that points within each cluster are near each other. Two objective functions frequently used to measure the performance of a clustering algorithm are, for any L 4 metric, (a) the maximum distance between pairs of points in the same cluster, and (b) the maximum distance between points in each cluster and a chosen cluster center; we refer to either measure as the cluster size. We show that one cannot approximate the optimal cluster size for a fixed number of clusters within a factor close to 2 in polynomial time, for two or more dimensions, unless P=NP. We also present an algorithm that achieves this factor of 2 in time O ( n log k ), and show that this running time is optimal in the algebraic decision tree model. For a fixed cluster size, on the other hand, we give a polynomial time approximation scheme that estimates the optimal number of clusters under the second measure of cluster size within factors arbitrarily close to 1. Our approach is extended to provide approximation algorithms for the restricted centers, suppliers, and weighted suppliers problems that run in optimal O ( n log k ) time and achieve optimal or nearly optimal approximation bounds."
]
}
|
1012.3697
|
2171180835
|
The diameter k-clustering problem is the problem of partitioning a finite subset of ℝ^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of k) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Our analysis holds not only for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related k-center and discrete k-center problems. For the corresponding agglomerative algorithms, we deduce an approximation factor of O(log k) as well.
|
Furthermore, there exist provably good approximation algorithms in this case. For the discrete @math -center problem, a simple @math -approximation algorithm is known for metric spaces @cite_9 , which immediately yields a @math -approximation algorithm for the diameter @math -clustering problem. For the @math -center problem, a variety of results is known. For example, for the Euclidean metric, a @math -approximation algorithm with running time @math is given in @cite_8 . This implies a @math -approximation algorithm with the same running time for the diameter @math -clustering problem.
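The simple 2-approximation for the discrete k-center problem referred to above is commonly realized by a greedy farthest-first traversal: pick an arbitrary first center, then repeatedly add the point farthest from the centers chosen so far. The sketch below shows this standard greedy scheme on toy Euclidean data; it is an illustration, not necessarily the exact algorithm of the cited paper.

```python
# Sketch of the classic greedy 2-approximation for the discrete k-center problem
# (farthest-first traversal). Toy Euclidean data for illustration only.
import math

def k_center_greedy(points, k, dist):
    centers = [points[0]]                                   # arbitrary first center
    while len(centers) < k:
        # next center: the point farthest from its nearest chosen center
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius                                  # covering radius <= 2 * OPT

if __name__ == "__main__":
    pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
    centers, radius = k_center_greedy(pts, 3, math.dist)
    print("centers:", centers)
    print("covering radius:", radius)
```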
|
{
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"1973264045",
"2059651397"
],
"abstract": [
"The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if PZ NP.",
"In this paper, we show that for several clustering problems one can extract a small set of points, so that using those core-sets enable us to perform approximate clustering efficiently. The surprising property of those core-sets is that their size is independent of the dimension.Using those, we present a (1+ e)-approximation algorithms for the k-center clustering and k-median clustering problems in Euclidean space. The running time of the new algorithms has linear or near linear dependency on the number of points and the dimension, and exponential dependency on 1 e and k. As such, our results are a substantial improvement over what was previously known.We also present some other clustering results including (1+ e)-approximate 1-cylinder clustering, and k-center clustering with outliers."
]
}
|
1012.3697
|
2171180835
|
The diameter k-clustering problem is the problem of partitioning a finite subset of ℝ^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of k) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Our analysis holds not only for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related k-center and discrete k-center problems. For the corresponding agglomerative algorithms, we deduce an approximation factor of O(log k) as well.
|
Also, for metric spaces a hierarchical clustering strategy with an approximation guarantee of @math for the discrete @math -center problem is known @cite_4 . This implies an algorithm with an approximation guarantee of @math for the diameter @math -clustering problem.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2769245605"
],
"abstract": [
"We show that for any data set in any metric space, it is possible to construct a hierarchical clustering with the guarantee that for every k, the induced k-clustering has cost at most eight times that of the optimal k-clustering. Here the cost of a clustering is taken to be the maximum radius of its clusters. Our algorithm is similar in simplicity and efficiency to popular agglomerative heuristics for hierarchical clustering, and we show that these heuristics have unbounded approximation factors."
]
}
|
1012.3697
|
2171180835
|
The diameter k-clustering problem is the problem of partitioning a finite subset of ℝ^d into k subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of k) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension d is a constant, we show that for any k the solution computed by this algorithm is an O(log k)-approximation to the diameter k-clustering problem. Our analysis holds not only for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related k-center and discrete k-center problems. For the corresponding agglomerative algorithms, we deduce an approximation factor of O(log k) as well.
|
This paper, as well as all of the above mentioned work, is about static clustering, i.e., in the problem definition we are given the whole set of input points at once. An alternative model of the input data is to consider sequences of points that are given one after another. In @cite_15 , the authors discuss clustering in a so-called incremental clustering model. They give an algorithm with a constant approximation factor that maintains a hierarchical clustering while new points are added to the input set. Furthermore, they show a lower bound of @math for the agglomerative complete linkage algorithm and the diameter @math -clustering problem. However, since their model differs from ours, their results have no bearing on our results.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2016973429"
],
"abstract": [
"Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications. The goal is to efficiently maintain clusters of small diameter as new points are inserted. We analyze several natural greedy algorithms and demonstrate that they perform poorly. We propose new deterministic and randomized incremental clustering algorithms which have a provably good performance, and which we believe should also perform well in practice. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number of clusters."
]
}
|