aid (stringlengths 9–15) | mid (stringlengths 7–10) | abstract (stringlengths 78–2.56k) | related_work (stringlengths 92–1.77k) | ref_abstract (dict) |
---|---|---|---|---|
0911.1619
|
2097319163
|
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
|
Also related is the topic of how rumors spread through social networks, and of how to identify the best nodes to target for a viral marketing campaign @cite_0 @cite_28 . Our work focuses on a single atomic link in the corresponding cascades of conversions and, in the first part, we ask what a fair price should be to pay a node for activating one of her neighbors. In answering this question we limit our attention to the immediate profit of the seller due to the individual sale, and we do not consider the additional value due to recommendation cascades caused by the newly activated node. However, given any algorithm to compute this “higher order” profit, it can trivially be incorporated into our results. The question of whether a selfish node should actually try to activate her neighbors at all is addressed in .
|
{
"cite_N": [
"@cite_0",
"@cite_28"
],
"mid": [
"2042123098",
"2061820396"
],
"abstract": [
"One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e, the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers those may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers---also known as viral marketing---can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases.",
"Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63 of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks."
]
}
|
0911.1619
|
2097319163
|
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
|
More generally, in the second part we look at a model where the recommendee loses trust in the recommender, i.e., with each unsuccessful recommendation she becomes less and less inclined to listen to any further suggestion. This is most likely to happen when the recommendee has the feeling that the recommendations are “dishonest”. How honest recommendations can be ensured when there are several recommenders is studied in @cite_22 . The approach suggested by the authors involves evaluating and ranking recommenders based on the ratings given to their recommended items by other people. This motivates recommenders to give good recommendations, in a similar way that eBay's rating system gives incentives for both buyers and sellers to “behave”. This approach, however, requires a public market where potential buyers can look for recommendations. This is not the setting of personal recommendations considered here.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"1972292875"
],
"abstract": [
"This paper presents HYRIWYG (How You Rate Influences What You Get), a reputation system applicable to Internet Recommendation Systems (RS). The novelty lies in the incentive mechanism that encourages evaluators to volunteer their true opinion. Honesty is encouraged because rewards are indexed by the quality of the RS's suggestions."
]
}
|
0911.1619
|
2097319163
|
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
|
The problem of trust decay is related to “banner blindness” @cite_2 @cite_15 @cite_7 , where web users become “blind” to banner ads due to overexposure. Cast to this setting, our mathematical model suggests that, even if web users' interest is “refreshed” by a single relevant advertisement that is clicked, the long-term profit of advertisers will stagnate as click-through-rates fall to zero. The only possible way out of this dilemma is to stop showing banner ads for a while so that users can “unlearn” to ignore all advertising. This approach is also suggested in a recent patent @cite_35 .
|
{
"cite_N": [
"@cite_35",
"@cite_15",
"@cite_7",
"@cite_2"
],
"mid": [
"2198242797",
"2125045187",
"2015762759",
"147235900"
],
"abstract": [
"A solution is provided wherein an identification of a user who is producing low value to a web page or service is received, wherein the identification was determined by measuring web page usage patterns for the user. Advertising is then presented on the web page or service for the user according to a retraining program, wherein the retraining program is designed to retrain the user's behavior so that the user no longer produces low value and wherein the retraining program presents advertising in a different way than would be presented without the retraining program.",
"In this paper, we develop an analytical approach to modeling consumer response to banner ad exposures at a sponsored content Web site that reveals significant heterogeneity in (unobservable) click proneness across consumers. The effect of repeated exposures to banner ads is negative and nonlinear, and the differential effect of each successive ad exposure is initially negative, though nonlinear, and levels off at higher levels of passive ad exposures. Further, significant correlations between session and consumer click proneness and banner exposure sensitivity suggest gains from repeated banner exposures when consumers are less click prone. For a particular number of sessions, more clicks are generated from consumers who revisit over a longer period of time, than for those with the same number of sessions in a relatively shorter timeframe. We also find that consumers are equally likely to click on banner ads placed early or late in navigation path and that exposures have a positive cumulative effect in inducing click-through in future sessions. Our results have implications for online advertising response measurement and dynamic ad placement, and may help guide advertising media placement decisions.",
"The seeming contradiction between “banner blindness” and Web users' complaints about distracting advertisements motivates a pair of experiments into the effect of banner ads on visual search. Experiment 1 measures perceived cognitive workload and search times for short words with two banners on the screen. Four kinds of banners were examined: (1) animated commercial, (2) static commercial, (3) cyan with flashing text, and (4) blank. Using NASA's Task Load Index, participants report increased workload under flashing text banners. Experiment 2 investigates search through news headlines at two levels of difficulty: exact matches and matches requiring semantic interpretation. Results show both animated and static commercial banners decrease visual search speeds. Eye tracking data reveal people rarely look directly at banners. A post hoc memory test confirms low banner recall and, surprisingly, that animated banners are more difficult to remember than static look-alikes. Results have implications for cognitive modeling and Web design.",
""
]
}
|
0911.1619
|
2097319163
|
If you recommend a product to me and I buy it, how much should you be paid by the seller? And if your sole interest is to maximize the amount paid to you by the seller for a sequence of recommendations, how should you recommend optimally if I become more inclined to ignore you with each irrelevant recommendation you make? Finding an answer to these questions is a key challenge in all forms of marketing that rely on and explore social ties; ranging from personal recommendations to viral marketing. In the first part of this paper, we show that there can be no pricing mechanism that is “truthful” with respect to the seller, and we use solution concepts from coalitional game theory, namely the Core, the Shapley Value, and the Nash Bargaining Solution, to derive provably “fair” prices for settings with one or multiple recommenders. We then investigate pricing mechanisms for the setting where recommenders have different “purchase arguments”. Here we show that it might be beneficial for the recommenders to withhold some of their arguments, unless anonymity-proof solution concepts, such as the anonymity-proof Shapley value, are used. In the second part of this paper, we analyze the setting where the recommendee loses trust in the recommender for each irrelevant recommendation. Here we prove that even if the recommendee regains her initial trust on each successful recommendation, the expected total profit the recommender can make over an infinite period is bounded. This can only be overcome when the recommendee also incrementally regains trust during periods without any recommendation. Here, we see an interesting connection to “banner blindness”, suggesting that showing fewer ads can lead to a higher long-term profit.
|
In the typical literature on sponsored search auctions @cite_1 @cite_16 it is assumed that the web search engine is optimizing its expected profit, and that its expected profit for showing a particular ad is the ad's click-through-rate (CTR) multiplied by the price the advertiser will be charged when her ad gets clicked. Usually, only a single round is considered or, when there are budget constraints @cite_6 @cite_3 , the CTRs are assumed to be fixed for the duration of the game. If, however, it is assumed that CTRs drop with each unsuccessful advertisement shown, then, in the long run, this puts more emphasis on showing ads with high CTR, regardless of how much their advertisers can be charged for a single click. Although different objective functions for the search engine have been considered @cite_6 , the setting of profit maximization with trust decay has not been studied, and we deem this an interesting area for future work.
|
{
"cite_N": [
"@cite_3",
"@cite_16",
"@cite_1",
"@cite_6"
],
"mid": [
"2119914577",
"2088532256",
"1735712825",
"2086561196"
],
"abstract": [
"Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work onthe auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywordsworks well. More precisely, this strategy gets at least a 1-1 e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.",
"Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: \"rank by bid\" (RBB) and \"rank by revenue\" (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the \"short-run\" incomplete information setting and the \"long-run\" complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.",
"",
"We discuss an auction framework in which sponsored search advertisements are delivered in response to queries. In practice, the presence of bidder budgets can have a significant impact on the ad delivery process. We propose an approach based on linear programming which takes bidder budgets into account, and uses them in conjunction with forecasting of query frequencies, and pricing and ranking schemes, to optimize ad delivery. Simulations show significant improvements in revenue and efficiency."
]
}
|
0911.1112
|
1510484544
|
The Web is ephemeral. Many resources have representations that change over time, and many of those representations are lost forever. A lucky few manage to reappear as archived resources that carry their own URIs. For example, some content management systems maintain version pages that reflect a frozen prior state of their changing resources. Archives recurrently crawl the web to obtain the actual representation of resources, and subsequently make those available via special-purpose archived resources. In both cases, the archival copies have URIs that are protocol-wise disconnected from the URI of the resource of which they represent a prior state. Indeed, the lack of temporal capabilities in the most common Web protocol, HTTP, prevents getting to an archived resource on the basis of the URI of its original. This turns accessing archived resources into a significant discovery challenge for both human and software agents, which typically involves following a multitude of links from the original to the archival resource, or of searching archives for the original URI. This paper proposes the protocol-based Memento solution to address this problem, and describes a proof-of-concept experiment that includes major servers of archival content, including Wikipedia and the Internet Archive. The Memento solution is based on existing HTTP capabilities applied in a novel way to add the temporal dimension. The result is a framework in which archived resources can seamlessly be reached via the URI of their original: protocol-based time travel for the Web.
|
The goal of adding a temporal aspect to web navigation has been explored in projects that focus on user interface enhancement. The Zoetrope project @cite_27 provides a rich interface for querying and interacting with a set of archived versions of selected seed pages. The interface leverages a local archive that is assembled by frequently polling those seed pages. The Past Web Browser @cite_4 provides a simpler level of interaction with changing pages, but it is restricted to navigating existing web archives such as the Internet Archive. And DiffIE is a plug-in for Internet Explorer that emphasizes web content that changed since a user's previous visit by leveraging a dedicated client cache @cite_8 . None of these projects propose protocol enhancements but rather use ad-hoc techniques to achieve their goals. All could benefit from DT-conneg as a standard mechanism for accessing prior representations of resources.
|
{
"cite_N": [
"@cite_27",
"@cite_4",
"@cite_8"
],
"mid": [
"2114103606",
"1985731768",
"2141834868"
],
"abstract": [
"The Web is ephemeral. Pages change frequently, and it is nearly impossible to find data or follow a link after the underlying page evolves. We present Zoetrope, a system that enables interaction with the historicalWeb (pages, links, and embedded data) that would otherwise be lost to time. Using a number of novel interactions, the temporal Web can be manipulated, queried, and analyzed from the context of familar pages. Zoetrope is based on a set of operators for manipulating content streams. We describe these primitives and the associated indexing strategies for handling temporal Web data. They form the basis of Zoetrope and enable our construction of new temporal interactions and visualizations.",
"While the Internet community recognized early on the need to store and preserve past content of the Web for future use, the tools developed so far for retrieving information from Web archives are still difficult to use and far less efficient than those developed for the \"live Web.\" We expect that future information retrieval systems will utilize both the \"live\" and \"past Web\" and have thus developed a general framework for a past Web browser. A browser built using this framework would be a client-side system that downloads, in real time, past page versions from Web archives for their customized presentation. It would use passive browsing, change detection and change animation to provide a smooth and satisfactory browsing experience. We propose a meta-archive approach for increasing the coverage of past Web pages and for providing a unified interface to the past Web. Finally, we introduce query-based and localized approaches for filtered browsing that enhance and speed up browsing and information retrieval from Web archives.",
"The Web is a dynamic information environment. Web content changes regularly and people revisit Web pages frequently. But the tools used to access the Web, including browsers and search engines, do little to explicitly support these dynamics. In this paper we present DiffIE, a browser plug-in that makes content change explicit in a simple and lightweight manner. DiffIE caches the pages a person visits and highlights how those pages have changed when the person returns to them. We describe how we built a stable, reliable, and usable system, including how we created compact, privacy-preserving page representations to support fast difference detection. Via a longitudinal user study, we explore how DiffIE changed the way people dealt with changing content. We find that much of its benefit came not from exposing expected change, but rather from drawing attention to unexpected change and helping people build a richer understanding of the Web content they frequent."
]
}
|
0911.1767
|
1493066352
|
Bargaining networks model the behavior of a set of players that need to reach pairwise agreements for making profits. Nash bargaining solutions are special outcomes of such games that are both stable and balanced. Kleinberg and Tardos proved a sharp algorithmic characterization of such outcomes, but left open the problem of how the actual bargaining process converges to them. A partial answer was provided by who proposed a distributed algorithm for constructing Nash bargaining solutions, but without polynomial bounds on its convergence rate. In this paper, we introduce a simple and natural model for this process, and study its convergence rate to Nash bargaining solutions. At each time step, each player proposes a deal to each of her neighbors. The proposal consists of a share of the potential profit in case of agreement. The share is chosen to be balanced in Nash's sense as far as this is feasible (with respect to the current best alternatives for both players). We prove that, whenever the Nash bargaining solution is unique (and satisfies a positive gap condition) this dynamics converges to it in polynomial time. Our analysis is based on an approximate decoupling phenomenon between the dynamics on different substructures of the network. This approach may be of general interest for the analysis of local algorithms on networks.
|
Azar and co-authors @cite_6 first studied the question of whether a balanced outcome can be produced by a local dynamics, and answered it positively. Their results, however, left two outstanding challenges: @math The bound on the convergence time proved in @cite_6 is exponential in the network size, and therefore does not provide a solid justification for convergence to NB solutions in large networks; @math The algorithm analyzed by these authors first selects a matching @math in @math @cite_10 , corresponding to the pairing of players that trade. In a second phase the algorithm determines the profit of each player. While such an algorithm can be implemented in a distributed way, it has been pointed out that it is not entirely realistic. Indeed, the rules of the dynamics change after the matching is found. Further, if the pairing is established at the outset, the players lose their bargaining power.
|
{
"cite_N": [
"@cite_10",
"@cite_6"
],
"mid": [
"2170987079",
"2096867163"
],
"abstract": [
"Max-product \"belief propagation\" (BP) is an iterative, message-passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding and combinatorial optimization, which involve graphs with many cycles, theoretical results about both the correctness and convergence of the algorithm are known in only a few cases (see section I for references). In this paper, we prove the correctness and convergence of max-product for finding the maximum weight matching (MWM) in bipartite graphs. Even though the underlying graph of the MWM problem has many cycles, somewhat surprisingly we show that the max-product algorithm converges to the correct MWM as long as the MWM is unique. We provide a bound on the number of iterations required and show that for a graph of size n, the computational cost of the algorithm scales as O(n3), which is the same as the computational cost of the best known algorithms for finding the MWM. We also provide an interesting relation between the dynamics of the max-product algorithm and the auction algorithm, which is a well-known distributed algorithm for solving the MWM problem.",
"Bargaining games on exchange networks have been studied by both economists and sociologists. A Balanced Outcome for such a game is an equilibrium concept that combines notions of stability and fairness. In a recent paper, Kleinberg and Tardos introduced balanced outcomes to the computer science community and provided a polynomial-time algorithm to compute the set of such outcomes. Their work left open a pertinent question: are there natural, local dynamics that converge quickly to a balanced outcome? In this paper, we provide a partial answer to this question by showing that simple edge-balancing dynamics converge to a balanced outcome whenever one exists."
]
}
|
0911.1875
|
2950228396
|
Given two rational maps @math and @math on @math of degree at least two, we study a symmetric, nonnegative-real-valued pairing @math which is closely related to the canonical height functions @math and @math associated to these maps. Our main results show a strong connection between the value of @math and the canonical heights of points which are small with respect to at least one of the two maps @math and @math . Several necessary and sufficient conditions are given for the vanishing of @math . We give an explicit upper bound on the difference between the canonical height @math and the standard height @math in terms of @math , where @math denotes the squaring map. The pairing @math is computed or approximated for several families of rational maps @math .
|
Baker-DeMarco have recently released a preprint @cite_14 which, among other things, shows that if @math are two rational maps (of degree at least two) defined over @math , then @math if and only if @math is infinite. This generalizes Mimar's result from maps defined over @math to those defined over @math . Yuan-Zhang have announced a generalization of this result to arbitrary polarized algebraic dynamical systems over @math .
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2952668330"
],
"abstract": [
"In this article, we combine complex-analytic and arithmetic tools to study the preperiodic points of one-dimensional complex dynamical systems. We show that for any fixed complex numbers a and b, and any integer d at least 2, the set of complex numbers c for which both a and b are preperiodic for z^d+c is infinite if and only if a^d = b^d. This provides an affirmative answer to a question of Zannier, which itself arose from questions of Masser concerning simultaneous torsion sections on families of elliptic curves. Using similar techniques, we prove that if two complex rational functions f and g have infinitely many preperiodic points in common, then they must have the same Julia set. This generalizes a theorem of Mimar, who established the same result assuming that f and g are defined over an algebraic extension of the rationals. The main arithmetic ingredient in the proofs is an adelic equidistribution theorem for preperiodic points over number fields and function fields, with non-archimedean Berkovich spaces playing an essential role."
]
}
|
0911.0136
|
1656912302
|
Context consistency checking, the checking of specified constraints on properties of contexts, is essential to context-aware applications. In order to delineate and adapt to dynamic changes in the pervasive computing environment, context-aware applications often need to specify and check behavioral consistency constraints over the contexts. This problem is challenging mainly due to the distributed and asynchronous nature of pervasive computing environments. Specifically, the critical issue in checking behavioral constraints is the temporal ordering of contextual activities. The contextual activities usually involve multiple context collecting devices, which are fully-decentralized and interact in an asynchronous manner. However, existing context consistency checking schemes do not work in asynchronous environments, since they implicitly assume the availability of a global clock or rely on synchronized interactions. To this end, we propose the Ordering Global Activity (OGA) algorithm, which detects the ordering of the global activities based on predicate detection in asynchronous environments. The essence of our approach is the message causality and its on-the-fly coding as logic vector clocks in asynchronous environments. We implement the Middleware Infrastructure for Predicate detection in Asynchronous environments (MIPA), over which the OGA algorithm is implemented and evaluated. The evaluation results show the impact of asynchrony on the checking of behavioral consistency constraints, which justifies the primary motivation of our work. They also show that OGA can achieve accurate checking of behavioral consistency constraints in dynamic pervasive computing environments.
|
Many existing studies on context-aware computing are concerned with middleware infrastructures that support collection and management of contexts @cite_31 @cite_27 @cite_25 @cite_17 @cite_11 @cite_35 @cite_15 . Various schemes have been proposed for context consistency checking over context-aware middleware infrastructures. In @cite_12 , consistency constraints were modeled by tuples, and consistency checking was based on comparison among elements in the tuples. In @cite_7 , consistency constraints were expressed in first-order logic, and an incremental consistency checking algorithm was proposed. In @cite_6 , a probabilistic approach was proposed to further improve the effectiveness of consistency checking. In @cite_21 @cite_37 , consistency constraints were expressed by assertions. However, existing schemes do not sufficiently consider the temporal relationships among the contexts. It is implicitly assumed that the contexts being checked belong to the same snapshot of time. Such limitations mean that these schemes do not work in asynchronous pervasive computing environments @cite_16 @cite_3 @cite_29 .
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_7",
"@cite_15",
"@cite_29",
"@cite_21",
"@cite_17",
"@cite_6",
"@cite_3",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"2104301421",
"",
"",
"2121476983",
"",
"2167404850",
"",
"",
"2069946274",
"2051712055",
"",
"2104504797",
""
],
"abstract": [
"",
"",
"Applications in pervasive computing are typically required to interact seamlessly with their changing environments. To provide users with smart computational services, these applications must be aware of incessant context changes in their environments and adjust their behaviors accordingly. As these environments are highly dynamic and noisy, context changes thus acquired could be obsolete, corrupted or inaccurate. This gives rise to the problem of context inconsistency, which must be timely detected in order to prevent applications from behaving anomalously. In this paper, we propose a formal model of incremental consistency checking for pervasive contexts. Based on this model, we further propose an efficient checking algorithm to detect inconsistent contexts. The performance of the algorithm and its advantages over conventional checking techniques are evaluated experimentally using Cabot middleware.",
"",
"",
"Context-awareness plays a key role in a paradigm shift from traditional desktop styled computing to emerging pervasive computing. Many context-aware systems have been built to achieve the vision of pervasive computing and alleviate the human attention bottleneck; however, these systems are far from real world applications. Quality of context is critical in reducing the gap between existing systems and real-life applications. Aiming to provide the support of quality of context, in this paper, we propose a novel quality model for context information and a context management mechanism for inconsistency resolution. We also build a prototype system to validate our proposed model and mechanism, and to assist the development of context-aware applications. Through our evaluations and case study, context-aware applications can be built with the support of quality of context",
"",
"Context-awareness is a key issue in pervasive computing. Context-aware applications are prone to the context consistency problem, where applications are confronted with conflicting contexts and cannot decide how to adapt themselves. In pervasive computing environments, users are often willing to accept certain degree of context inconsistency, as long as it can reduce the consistency maintenance cost, e.g., query delay and battery power. However, existing consistency maintenance schemes do not enable the users to make such tradeoffs. To this end, we propose the probabilistic consistency checking for pervasive context (PCCPC) algorithm. Detailed performance analysis shows that PCCPC enables the users to check consistency over arbitrarily specified ratio of context. We also conduct experiments to study the cost reduced by probabilistic checking. The analytical and the experimental results show that PCCPC enables the users to efficiently make tradeoffs between context consistency and the associated checking cost.",
"",
"",
"LIME (Linda in a mobile environment) is a model and middleware supporting the development of applications that exhibit the physical mobility of hosts, logical mobility of agents, or both. LIME adopts a coordination perspective inspired by work on the Linda model. The context for computation, represented in Linda by a globally accessible persistent tuple space, is refined in LIME to transient sharing of the identically named tuple spaces carried by individual mobile units. Tuple spaces are also extended with a notion of location and programs are given the ability to react to specified states. The resulting model provides a minimalist set of abstractions that facilitates the rapid and dependable development of mobile applications. In this article we illustrate the model underlying LIME, provide a formal semantic characterization for the operations it makes available to the application developer, present its current design and implementation, and discuss lessons learned in developing applications that involve physical mobility.",
"Applications running on mobile devices are heavily context-aware and adaptive, leading to new analysis and testing challenges as streams of context values drive these applications to undesired configurations that are not easily exposed by existing validation techniques. We address this challenge by employing a finite-state model of adaptive behavior to enable the detection of faults caused by (1) erroneous adaptation logic, and (2) asynchronous updating of context information, which leads to inconsistencies between the external physical context and its internal representation within an application. We identify a number of adaptation fault patterns, each describing a class of faulty behaviors that we detect automatically by analyzing the system's adaptation model. We illustrate our approach on a simple but realistic application in which a cellphone's configuration profile is changed automatically based on the user's location, speed and surrounding environment.",
"",
"Context-awareness is a key feature of pervasive computing whose environments keep evolving. The support of context-awareness requires comprehensive management including detection and resolution of context inconsistency, which occurs naturally in pervasive computing. In this paper we present a framework for realizing dynamic context consistency management. The framework supports inconsistency detection based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions to context sources. We further present an implementation based on the Cabot middleware. The feasibility of the framework and its performance are evaluated through a case study and a simulated experiment, respectively.",
""
]
}
|
0911.0136
|
1656912302
|
Context consistency checking, the checking of specified constraints on properties of contexts, is essential to context-aware applications. In order to delineate and adapt to dynamic changes in the pervasive computing environment, context-aware applications often need to specify and check behavioral consistency constraints over the contexts. This problem is challenging mainly due to the distributed and asynchronous nature of pervasive computing environments. Specifically, the critical issue in checking behavioral constraints is the temporal ordering of contextual activities. The contextual activities usually involve multiple context collecting devices, which are fully-decentralized and interact in an asynchronous manner. However, existing context consistency checking schemes do not work in asynchronous environments, since they implicitly assume the availability of a global clock or rely on synchronized interactions. To this end, we propose the Ordering Global Activity (OGA) algorithm, which detects the ordering of the global activities based on predicate detection in asynchronous environments. The essence of our approach is the message causality and its on-the-fly coding as logic vector clocks in asynchronous environments. We implement the Middleware Infrastructure for Predicate detection in Asynchronous environments (MIPA), over which the OGA algorithm is implemented and evaluated. The evaluation results show the impact of asynchrony on the checking of behavioral consistency constraints, which justifies the primary motivation of our work. They also show that OGA can achieve accurate checking of behavioral consistency constraints in dynamic pervasive computing environments.
|
In asynchronous environments, the concept of temporal ordering of events must be carefully reexamined @cite_32 . The happen-before relationship intrinsic in message passing is a promising solution to context consistency checking in asynchronous pervasive computing environments. In our previous work @cite_20 , the Concurrent Event Detection for Asynchronous consistency checking (CEDA) algorithm was proposed to detect concurrent contextual activities in asynchronous pervasive computing environments. CEDA explicitly checks whether contexts being checked belong to the same snapshot of time based on the happen-before relationship among the beginning and ending of contextual activities. However, behavior patterns of contexts cannot be specified and checked by CEDA. In this paper, we study how to check behavioral patterns of contexts based on the ordering of global contextual activities.
|
{
"cite_N": [
"@cite_32",
"@cite_20"
],
"mid": [
"1973501242",
"2141246182"
],
"abstract": [
"The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.",
"Contexts, the pieces of information that capture the characteristics of computing environments, are often inconsistent in the dynamic and uncertain pervasive computing environments. Various schemes have been proposed to check context consistency for pervasive applications. However, existing schemes implicitly assume that the contexts being checked belong to the same snapshot of time. This limitation makes existing schemes do not work in pervasive computing environments, which are characterized by the asynchronous coordination among computing devices. The main challenge imposed on context consistency checking by asynchronous environments is how to interpret and detect concurrent events. To this end, we propose in this paper the Concurrent Events Detection for Asynchronous consistency checking (CEDA) algorithm. An analytical model, together with corresponding numerical results, is derived to study the performance of CEDA. We also conduct extensive experimental evaluation to investigate whether CEDA is desirable for context-aware applications. Both theoretical analysis and experimental evaluation show that CEDA accurately detects concurrent events in time in asynchronous pervasive computing environments, even with dynamic changes in message delay, duration of events and error rate of context collection."
]
}
|
0911.0050
|
1750854148
|
We present a method to analyse the scientific contributions between research groups. Given multiple research groups, we construct their journal proceeding graphs and then compute the similarity gap between them using network analysis. This analysis can be used for measuring similarity gap of the topics qualities between research groups' scientific contributions. We demonstrate the practicality of our method by comparing the scientific contributions by Korean researchers with those by the global researchers for information security in 2006 - 2008. The empirical analysis shows that the current security research in South Korea has been isolated from the global research trend.
|
The use of statistical bibliometric indicators in research evaluation emerged in the 1960s and 1970s @cite_11 , and such indicators are in wide use today due to the development of the relevant databases. These indicators provide useful output measures of activity and performance in scientific research and have become standard tools for research evaluation @cite_37 . However, some methodological problems of research evaluation at the micro level (e.g., the scientific contribution of a small research group) still remain unresolved @cite_8 @cite_16 . @cite_0 discussed the problems of bibliometric indicators for computer science in detail.
|
{
"cite_N": [
"@cite_37",
"@cite_8",
"@cite_0",
"@cite_16",
"@cite_11"
],
"mid": [
"2028599680",
"2043465687",
"2038745769",
"",
"2184347607"
],
"abstract": [
"In this communication we perform an analysis of European science, investigating the way countries are joined in clusters according to their similarity. An extremely clear pattern arises, suggesting that geographical and cultural factors strongly influence the scientific fabric of these countries. Although it is seen that one of the major factors behind Science in Europe is, apparently, geographical proximity, bilateral cooperation between countries cannot fully account for the respective similarity. Long-term policies, planning and investment are also visible in the results.",
"This paper examines a number of the criticisms that citation analysis has been subjected to over the years. It is argued that many of these criticisms have been based on only limited examinations of data in particular contexts and it remains unclear how broadly applicable these problems are to research conducted at different levels of analysis, in specific field, and among various national data sets. Relevant evidence is provided from analysis of Australian and international data. Citation analysis is likely to be most reliable when data is aggregated and at the highly-cited end of the distribution. It is possible to make valid inferences about individual cases, although considerable caution should be used. Bibliometric measures should be viewed as a useful supplement to other research evaluation measures rather than as a replacement.",
"Reassessing the assessment criteria and techniques traditionally used in evaluating computer science research effectiveness.",
"",
"Research evaluation is based on a representation of the research. Improving the quality of the representations cannot prevent the indicators from being provided with meaning by a receiving discourse different from the research system(s) under study. Since policy decisions affect the systems under study, this configuration generates a tension that has been driving the further development of science indicators since World War II. The article discusses historically the emergence of science indicators and some of the methodological problems involved. More recent developments have been induced by the emergence of the European Union as a supra-national level of policy coordination and by the Internet as a global medium of communication. As science, technology, and innovation policies develop increasingly at various levels and with different objectives, the evaluative discourses can be expected to differentiate with reference to the discourses in which they are enrolled."
]
}
|
0911.0050
|
1750854148
|
We present a method to analyse the scientific contributions between research groups. Given multiple research groups, we construct their journal proceeding graphs and then compute the similarity gap between them using network analysis. This analysis can be used for measuring similarity gap of the topics qualities between research groups' scientific contributions. We demonstrate the practicality of our method by comparing the scientific contributions by Korean researchers with those by the global researchers for information security in 2006 - 2008. The empirical analysis shows that the current security research in South Korea has been isolated from the global research trend.
|
An alternative approach is to analyse researchers' social networks such as co-citation networks @cite_2 @cite_24 @cite_27 @cite_5 and co-authorship networks @cite_28 @cite_22 @cite_39 @cite_33 . Citation networks can also be used to evaluate the importance of journals and proceedings by computing centrality values of the nodes in a citation graph. Co-authorship networks are an important class of social networks and have been used extensively. Many co-authorship networks have been studied to investigate the patterns, motivation, and structure of scientific collaboration @cite_18 @cite_34 @cite_32 @cite_20 @cite_12 @cite_7 @cite_10 @cite_3 . Morris @cite_31 proposed a model to monitor the birth and development of a scientific specialty with a collection of journal papers. Lee @cite_35 analysed the research trends in the information security field using “co-word analysis”. Our work extends these approaches to measure the similarity gap between research groups by comparing their publication outputs.
|
{
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_22",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_34",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2042463707",
"1967880836",
"",
"",
"",
"2025441910",
"",
"",
"",
"",
"",
"1974087593",
"",
"2094595236",
"",
"",
"",
""
],
"abstract": [
"In the highly competitive world, there has been a concomitant increase in the need for the research and planning methodology, which can perform an advanced assessment of technological opportunities and an early perception of threats and possibilities of the emerging technology according to the nation’s economic and social status.",
"Abstract Patrick Ion (Mathematical Reviews) and Jerry Grossman (Oakland University) maintain a collection of data on Paul Erdos, his co-authors and their co-authors. These data can be represented by a graph, also called the Erdos collaboration graph. In this paper, some techniques for analysis of large networks (different approaches to identify ‘interesting’ individuals and groups, analysis of internal structure of the main core using pre-specified blockmodeling and hierarchical clustering) and visualizations of their parts, are presented on the case of Erdos collaboration graph, using the program Pajek .",
"",
"",
"",
"In this paper, we describe the evolution and impact of computer-supported cooperative work (CSCW) research through social network analysis of coauthorship data. A network of authors as nodes and shared papers as links is used to compare patterns of growth and collaboration in CSCW with other domains, such as high-energy physics and computer science. Further, the coauthorship network data are used to depict dynamic changes in the structure of CSCW collaborations over time. Examination of these changes shows high volatility in the composition of the CSCW research community over decade-long time spans. These data are augmented by a brief citation analysis of recent CSCW conferences. We discuss the implications of the CSCW findings in terms of the influence of CSCW research on the larger field of HCI research as well as the general utility of social network analysis for understanding patterns of collaboration.",
"",
"",
"",
"",
"",
"This paper presents the analysis and modelling of two data sets associated with the literature of hypertext as represented by the ACM Hypertext conference series. This work explores new ways of organising and accessing the vast amount of interrelated information. The first data set, including all the full papers published in this series (1987 1998), is structured and visualised as a semantic space. This semantic space provides an access point for each paper in this collection. The second data set, containing author co-citation counts based on nine conferences in the series (1989 1998), is analysed and mapped in its entirety and in three evenly distributed sub-periods. Specialties major research fronts in the field of hypertext are identified based on the results of a factor analysis and corresponding author co-citation maps. The names of authors in these maps are linked to the bibliographical and citation summaries of these authors on the WWW.",
"",
"A model is presented of the manifestation of the birth and development of a scientific specialty in a collection of journal papers. The proposed model, Cumulative Advantage by Paper with Exemplars (CAPE) is an adaptation of Price's cumulative advantage model (D. Price, [1976]. Two modifications are made: (a) references are cited in groups by paper, and (b) the model accounts for the generation of highly cited exemplar references immediately after the birth of the specialty. This simple growth process mimics many characteristic features of real collections of papers, including the structure of the paper-to-reference matrix, the reference-per-paper distribution, the paper-per-reference distribution, the bibliographic coupling distribution, the cocitation distribution, the bibliographic coupling clustering coefficient distribution, and the temporal distribution of exemplar references. The model yields a great deal of insight into the process that produces the connectedness and clustering of a collection of articles and references. Two examples are presented and successfully modeled: a collection of 131 articles on MEMS RF (microelectromechnical systems radio frequency) switches, and a collection of 901 articles on the subject of complex networks. © 2005 Wiley Periodicals, Inc.",
"",
"",
"",
""
]
}
|
0911.0517
|
2950623060
|
We prove a quantitative version of the Gibbard-Satterthwaite theorem. We show that a uniformly chosen voter profile for a neutral social choice function f of @math alternatives and n voters will be manipulable with probability at least @math , where @math is the minimal statistical distance between f and the family of dictator functions. Our results extend those of FrKaNi:08, which were obtained for the case of 3 alternatives, and imply that the approach of masking manipulations behind computational hardness (as considered in BarthOrline:91, ConitzerS03b, ElkindL05, ProcacciaR06 and ConitzerS06) cannot hide manipulations completely. Our proof is geometric. More specifically it extends the method of canonical paths to show that the measure of the profiles that lie on the interface of 3 or more outcomes is large. To the best of our knowledge our result is the first isoperimetric result to establish interface of more than two bodies.
|
Corollary and Corollary , which extend the result of @cite_3 to the case of @math or more alternatives, are thus more relevant with respect to the hardness of finding a manipulation. They imply that, in the case where votes are cast uniformly at random, a random change of preference for a random voter will yield a beneficial manipulation with non-negligible probability (at most polynomially small in @math and @math , by Corollary ). Thus, in the setup of @cite_17 @cite_16 @cite_1 @cite_2 @cite_4 , with positive probability, a single voter with black-box access to @math can efficiently manipulate. This implies that the approach of masking manipulations behind computational hardness cannot hide manipulations completely.
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_16",
"@cite_17"
],
"mid": [
"2055053954",
"1580922218",
"2070986136",
"1493942848",
"2950164662",
""
],
"abstract": [
"Explores, for several classes of social choice rules, the distribution of the number of profiles at which a rule can be strategically manipulated. In this paper, we will do comparative social choice, looking for information about how social choice rules compare in their vulnerability to strategic misrepresentation of preferences.",
"This paper addresses the problem of constructing voting protocols that are hard to manipulate. We describe a general technique for obtaining a new protocol by combining two or more base protocols, and study the resulting class of (vote-once) hybrid voting protocols, which also includes most previously known manipulation-resistant protocols. We show that for many choices of underlying base protocols, including some that are easily manipulable, their hybrids are NP-hard to manipulate, and demonstrate that this method can be used to produce manipulation-resistant protocols with unique combinations of useful features.",
"The Gibbard-Satterthwaite theorem states that every non-trivial voting method among at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method among 3 alternatives that is far from being a dictatorship.",
"Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.",
"Voting is a general method for preference aggregation in multiagent settings, but seminal results have shown that all (nondictatorial) voting protocols are manipulable. One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard computationally. A number of recent papers study the complexity of manipulating existing protocols. This paper is the first work to take the next step of designing new protocols that are especially hard to manipulate. Rather than designing these new protocols from scratch, we instead show how to tweak existing protocols to make manipulation hard, while leaving much of the original nature of the protocol intact. The tweak studied consists of adding one elimination preround to the election. Surprisingly, this extremely simple and universal tweak makes typical protocols hard to manipulate! The protocols become NP-hard, #P-hard, or PSPACE-hard to manipulate, depending on whether the schedule of the preround is determined before the votes are collected, after the votes are collected, or the scheduling and the vote collecting are interleaved, respectively. We prove general sufficient conditions on the protocols for this tweak to introduce the hardness, and show that the most common voting protocols satisfy those conditions. These are the first results in voting settings where manipulation is in a higher complexity class than NP (presuming PSPACE @math NP).",
""
]
}
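The related-work paragraph of this record claims that, for uniformly random profiles, a random change of preference by a random voter yields a beneficial manipulation with non-negligible probability. The sketch below is not taken from the paper; it is a minimal Monte Carlo estimate of that probability for plurality voting with a lexicographic tie-break, and the numbers of voters, candidates and trials are arbitrary choices.

```python
import random
from itertools import permutations

def plurality_winner(profile):
    """Candidate with the most first-place votes; ties broken toward the lower index."""
    counts = {}
    for ranking in profile:
        counts[ranking[0]] = counts.get(ranking[0], 0) + 1
    return min(counts, key=lambda c: (-counts[c], c))

def manipulation_probability(n_voters=5, n_candidates=4, trials=20000, seed=0):
    rng = random.Random(seed)
    rankings = list(permutations(range(n_candidates)))
    hits = 0
    for _ in range(trials):
        profile = [rng.choice(rankings) for _ in range(n_voters)]
        winner = plurality_winner(profile)
        i = rng.randrange(n_voters)               # random voter ...
        truthful = profile[i]
        profile[i] = rng.choice(rankings)         # ... submits a random new vote
        new_winner = plurality_winner(profile)
        if truthful.index(new_winner) < truthful.index(winner):
            hits += 1                             # the change benefited that voter
    return hits / trials

if __name__ == "__main__":
    print(manipulation_probability())             # empirical manipulation probability
```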
|
0911.0517
|
2950623060
|
We prove a quantitative version of the Gibbard-Satterthwaite theorem. We show that a uniformly chosen voter profile for a neutral social choice function f of @math alternatives and n voters will be manipulable with probability at least @math , where @math is the minimal statistical distance between f and the family of dictator functions. Our results extend those of FrKaNi:08, which were obtained for the case of 3 alternatives, and imply that the approach of masking manipulations behind computational hardness (as considered in BarthOrline:91, ConitzerS03b, ElkindL05, ProcacciaR06 and ConitzerS06) cannot hide manipulations completely. Our proof is geometric. More specifically it extends the method of canonical paths to show that the measure of the profiles that lie on the interface of 3 or more outcomes is large. To the best of our knowledge our result is the first isoperimetric result to establish interface of more than two bodies.
|
We further note that Dobzinski and Procaccia @cite_14 established an analogous result for the case of two voters and any number of candidates, under a comparably weak assumption on the voting rule.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1859798081"
],
"abstract": [
"The recent result of Friedgut, Kalai and Nisan [9] gives a quantitative version of the Gibbard-Satterthwaite Theorem regarding manipulation in elections, but holds only for neutral social choice functions and three alternatives. We complement their theorem by proving a similar result regarding Pareto-Optimal social choice functions when the number of voters is two. We discuss the implications of our results with respect to the agenda of precluding manipulation in elections by means of computational hardness."
]
}
|
0911.0517
|
2950623060
|
We prove a quantitative version of the Gibbard-Satterthwaite theorem. We show that a uniformly chosen voter profile for a neutral social choice function f of @math alternatives and n voters will be manipulable with probability at least @math , where @math is the minimal statistical distance between f and the family of dictator functions. Our results extend those of FrKaNi:08, which were obtained for the case of 3 alternatives, and imply that the approach of masking manipulations behind computational hardness (as considered in BarthOrline:91, ConitzerS03b, ElkindL05, ProcacciaR06 and ConitzerS06) cannot hide manipulations completely. Our proof is geometric. More specifically it extends the method of canonical paths to show that the measure of the profiles that lie on the interface of 3 or more outcomes is large. To the best of our knowledge our result is the first isoperimetric result to establish interface of more than two bodies.
|
Theorem considers the case of @math or more alternatives, compared to @math alternatives considered in @cite_3 . The two results are, however, difficult to compare: the result of @cite_3 counts the number of manipulation pairs @math , where @math is a manipulation point, and @math is the voting vector obtained from @math after one of the voters changed her vote to gain a more favorable outcome, while our result is stated in terms of the number of manipulations alone. Our proof actually shows a lower bound
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2070986136"
],
"abstract": [
"The Gibbard-Satterthwaite theorem states that every non-trivial voting method among at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method among 3 alternatives that is far from being a dictatorship."
]
}
|
0910.5399
|
2013719343
|
Scott's graph model is a lambda-algebra based on the observation that continuous endofunctions on the lattice of sets of natural numbers can be represented via their graphs. A graph is a relation mapping finite sets of input values to output values. We consider a similar model based on relations whose input values are finite sequences rather than sets. This alteration means that we are taking into account the order in which observations are made. This new notion of graph gives rise to a model of affine lambda-calculus that admits an interpretation of imperative constructs including variable assignment, dereferencing and allocation. Extending this untyped model, we construct a category that provides a model of typed higher-order imperative computation with an affine type system. An appropriate language of this kind is Reynolds's Syntactic Control of Interference. Our model turns out to be fully abstract for this language. At a concrete level, it is the same as Reddy's object spaces model, which was the first "state-free" model of a higher-order imperative programming language and an important precursor of games models. The graph model can therefore be seen as a universal domain for Reddy's model.
|
The utility of a universal type for establishing properties of a model is well-known, and was explained in detail by Longley @cite_8 . The central idea of this paper, of modifying Scott's graph model to record slightly different information, has also been used by Longley in @cite_22 to obtain a model of fresh name generation. A similar model construction has been investigated by @cite_15 . We shall remark further on the connections between these papers and our present work below, although we leave closer investigation for future work.
|
{
"cite_N": [
"@cite_15",
"@cite_22",
"@cite_8"
],
"mid": [
"2087185080",
"",
"182780044"
],
"abstract": [
"We give a category-theoretic formulation of Engeler-style models for the untyped λ-calculus. In order to do so, we exhibit an equivalence between distributive laws and extensions of one monad to the Kleisli category of another and explore the example of an arbitrary commutative monad together with the monad for commutative monoids. On Set as base category, the latter is the finite multiset monad. We exploit the self-duality of the category Rel, i.e., the Kleisli category for the powerset monad, and the category theoretic structures on it that allow us to build models of the untyped λ-calculus, yielding a variant of the Engeler model. We replace the monad for commutative monoids by that for idempotent commutative monoids, which, on Set, is the finite powerset monad. This does not quite yield a distributive law, so requires a little more subtlety, but, subject to that subtlety, it yields exactly the original Engeler construction.",
"",
"We discuss the standard notions of universal object and universal type, and illustrate the usefulness of these concepts via several examples from denotational semantics. The purpose of the paper is to provide a gentle introduction to these notions, and to advocate a particular point of view which makes significant use of them. The main ideas here are not new, though our expository slant is somewhat novel, and some of our examples lead to seemingly new results."
]
}
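As a small, purely illustrative aside on the set-based graph model that the abstract of this record takes as its starting point (the paper itself replaces finite input sets by finite sequences), the sketch below encodes a graph as a set of (finite premise set, output) pairs and applies it to an argument set. The successor-style example graph is an ad hoc choice, not something appearing in the paper.

```python
def apply_graph(graph, argument):
    """Scott-style application: keep outputs whose finite premise set is contained in the argument."""
    return {output for (premise, output) in graph if premise <= argument}

# Toy graph for the "successor" function: from the premise {n} conclude n + 1.
succ_graph = {(frozenset({n}), n + 1) for n in range(10)}

print(apply_graph(succ_graph, frozenset({2, 5})))   # {3, 6}
```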
|
0910.5046
|
1644334023
|
A new style of temporal debugging is proposed. The new URDB debugger can employ such techniques as temporal search for finding an underlying fault that is causing a bug. This improves on the standard iterative debugging style, which iteratively re-executes a program under debugger control in the search for the underlying fault. URDB acts as a meta-debugger, with current support for four widely used debuggers: gdb, MATLAB, python, and perl. Support for a new debugger can be added in a few hours. Among its points of novelty are: (i) the first reversible debuggers for MATLAB, python, and perl; (ii) support for today's multi-core architectures; (iii) reversible debugging of multi-process and distributed computations; and (iv) temporal search on changes in program expressions. URDB gains its reversibility and temporal abilities through the fast checkpoint-restart capability of DMTCP (Distributed MultiThreaded CheckPointing). The recently enhanced DMTCP also adds ptrace support, enabling one to freeze, migrate, and replicate debugging sessions.
|
Although not implemented in the current work, there are potential approaches to orthogonally add determinism to URDB while running on a multi-core architecture. Two examples of adding determinism to multi-core architectures are Kendo @cite_8 and DMP @cite_13 . A method for adding only partial determinism is described for PRES @cite_0 . The PRES technique of using feedback generation from previous replay attempts is especially interesting for its synergy with the URDB reversible debugger, since a generalization of URDB would allow URDB to run PRES on an application, while giving PRES program control with which to direct URDB when to create checkpoints, and when to repeatedly re-execute from a given checkpoint. Finally, logging of I/O and certain other events can also be added through wrappers around system calls.
|
{
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_8"
],
"mid": [
"2130473288",
"2134440791",
"2122532513"
],
"abstract": [
"Bug reproduction is critically important for diagnosing a production-run failure. Unfortunately, reproducing a concurrency bug on multi-processors (e.g., multi-core) is challenging. Previous techniques either incur large overhead or require new non-trivial hardware extensions. This paper proposes a novel technique called PRES (probabilistic replay via execution sketching) to help reproduce concurrency bugs on multi-processors. It relaxes the past (perhaps idealistic) objective of \"reproducing the bug on the first replay attempt\" to significantly lower production-run recording overhead. This is achieved by (1) recording only partial execution information (referred to as \"sketches\") during the production run, and (2) relying on an intelligent replayer during diagnosis time (when performance is less critical) to systematically explore the unrecorded non-deterministic space and reproduce the bug. With only partial information, our replayer may require more than one coordinated replay run to reproduce a bug. However, after a bug is reproduced once, PRES can reproduce it every time. We implemented PRES along with five different execution sketching mechanisms. We evaluated them with 11 representative applications, including 4 servers, 3 desktop client applications, and 4 scientific graphics applications, with 13 real-world concurrency bugs of different types, including atomicity violations, order violations and deadlocks. PRES (with synchronization or system call sketching) significantly lowered the production-run recording overhead of previous approaches (by up to 4416 times), while still reproducing most tested bugs in fewer than 10 replay attempts. Moreover, PRES scaled well with the number of processors; PRES's feedback generation from unsuccessful replays is critical in bug reproduction.",
"Current shared memory multicore and multiprocessor systems are nondeterministic. Each time these systems execute a multithreaded application, even if supplied with the same input, they can produce a different output. This frustrates debugging and limits the ability to properly test multithreaded code, becoming a major stumbling block to the much-needed widespread adoption of parallel programming. In this paper we make the case for fully deterministic shared memory multiprocessing (DMP). The behavior of an arbitrary multithreaded program on a DMP system is only a function of its inputs. The core idea is to make inter-thread communication fully deterministic. Previous approaches to coping with nondeterminism in multithreaded programs have focused on replay, a technique useful only for debugging. In contrast, while DMP systems are directly useful for debugging by offering repeatability by default, we argue that parallel programs should execute deterministically in the field as well. This has the potential to make testing more assuring and increase the reliability of deployed multithreaded software. We propose a range of approaches to enforcing determinism and discuss their implementation trade-offs. We show that determinism can be provided with little performance cost using our architecture proposals on future hardware, and that software-only approaches can be utilized on existing systems.",
"Although chip-multiprocessors have become the industry standard, developing parallel applications that target them remains a daunting task. Non-determinism, inherent in threaded applications, causes significant challenges for parallel programmers by hindering their ability to create parallel applications with repeatable results. As a consequence, parallel applications are significantly harder to debug, test, and maintain than sequential programs. This paper introduces Kendo: a new software-only system that provides deterministic multithreading of parallel applications. Kendo enforces a deterministic interleaving of lock acquisitions and specially declared non-protected reads through a novel dynamically load-balanced deterministic scheduling algorithm. The algorithm tracks the progress of each thread using performance counters to construct a deterministic logical time that is used to compute an interleaving of shared data accesses that is both deterministic and provides good load balancing. Kendo can run on today's commodity hardware while incurring only a modest performance cost. Experimental results on the SPLASH-2 applications yield a geometric mean overhead of only 16 when running on 4 processors. This low overhead makes it possible to benefit from Kendo even after an application is deployed. Programmers can start using Kendo today to program parallel applications that are easier to develop, debug, and test."
]
}
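The related-work paragraph of this record ends by noting that logging of I/O and other nondeterministic events can be added through wrappers around system calls. The toy sketch below is not URDB code; it only illustrates the record/replay idea for a single nondeterministic source, logging return values in record mode and feeding them back in replay mode so that a re-execution sees the same values.

```python
import time

class ReplayWrapper:
    """Wrap a nondeterministic callable; log return values on record, reuse them on replay."""

    def __init__(self, func):
        self.func = func
        self.log = []
        self.mode = "record"
        self.cursor = 0

    def __call__(self, *args, **kwargs):
        if self.mode == "record":
            value = self.func(*args, **kwargs)
            self.log.append(value)
            return value
        value = self.log[self.cursor]    # replay: hand back the recorded value
        self.cursor += 1
        return value

    def start_replay(self):
        self.mode, self.cursor = "replay", 0

wrapped_time = ReplayWrapper(time.time)
recorded = [wrapped_time() for _ in range(3)]   # first (recording) run
wrapped_time.start_replay()
replayed = [wrapped_time() for _ in range(3)]   # deterministic re-execution
assert recorded == replayed
```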
|
0910.4397
|
2950422405
|
This paper investigates the problem of determining a binary-valued function through a sequence of strategically selected queries. The focus is an algorithm called Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued function through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. This paper develops novel incoherence and geometric conditions under which GBS achieves the information-theoretically optimal query complexity; i.e., given a collection of N hypotheses, GBS terminates with the correct function after no more than a constant times log N queries. Furthermore, a noise-tolerant version of GBS is developed that also achieves the optimal query complexity. These results are applied to learning halfspaces, a problem arising routinely in image processing and machine learning.
|
Generalized binary search can be viewed as a generalization of classic binary search, Shannon-Fano coding as noted by Goodman and Smyth @cite_39 , and channel coding with noiseless feedback as studied by Horstein @cite_1 . Problems of this nature arise in many applications, including channel coding (e.g., the work of Horstein @cite_1 and Zigangirov @cite_17 ), experimental design (e.g., as studied by Rényi @cite_6 @cite_26 ), disease diagnosis (e.g., see the work of Loveland @cite_41 ), fault-tolerant computing (e.g., the work of @cite_29 ), the scheduling problem considered by @cite_34 , computer vision problems investigated by Geman and Jedynak @cite_14 and @cite_30 , image processing problems studied by Korostelev and Kim @cite_32 @cite_8 , and active learning research, for example the investigations by @cite_3 , Dasgupta @cite_4 , @cite_20 , and Castro and Nowak @cite_38 .
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_26",
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_3",
"@cite_34",
"@cite_20",
"@cite_17"
],
"mid": [
"2123260089",
"2106447856",
"",
"2128439793",
"2151596305",
"",
"1975096669",
"2038435918",
"",
"2041021469",
"",
"2024262734",
"208956922",
"1843157236",
"2128518360",
""
],
"abstract": [
"A fundamental problem in model-based computer vision is that of identifying which of a given set of geometric models is present in an image. Considering a \"probe\" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies (\"decision trees\") for probing an image, with the goal to minimize the number of probes necessary (in the worst case) to determine which single model is present. We show that a ⌈lg k⌉ height binary decision tree always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision tress when the models are given as a set of polygons in the plane. We show that constructing a minimum height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic strategy and show that, in the general case, it yields a decision tree whose height is at most ⌈lg k⌉ times that of an optimal tree. Finally, we discuss some restricted cases whose special structure allows for improved results.",
"This paper analyzes the potential advantages and theoretical challenges of \"active learning\" algorithms. Active learning involves sequential sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to \"passive learning\" algorithms, which are based on nonadaptive (usually random) samples. There are a number of empirical and theoretical results suggesting that in certain situations active learning can be significantly more effective than passive learning. However, the fact that active learning algorithms are feedback systems makes their theoretical analysis very challenging. This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for \"boundary fragment\" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.",
"",
"We present a new approach for tracking roads from satellite images, and thereby illustrate a general computational strategy (\"active testing\") for tracking 1D structures and other recognition tasks in computer vision. Our approach is related to recent work in active vision on \"where to look next\" and motivated by the \"divide-and-conquer\" strategy of parlour games. We choose \"tests\" (matched filters for short road segments) one at a time in order to remove as much uncertainty as possible about the \"true hypothesis\" (road position) given the results of the previous tests. The tests are chosen online based on a statistical model for the joint distribution of tests and hypotheses. The problem of minimizing uncertainty (measured by entropy) is formulated in simple and explicit analytical terms. At each iteration new image data are examined and a new entropy minimization problem is solved (exactly), resulting in a new image location to inspect, and so forth. We report experiments using panchromatic SPOT satellite imagery with a ground resolution of ten meters.",
"We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.",
"",
"Binary testing concerns finding good algorithms to solve the class of binary identification problems. A binary identification problem has as input a set of objects, including one regarded as distinguished (e.g., faulty), for each object an a priori estimate that it is the distinguished object, and a set of tests. Output is a testing procedure to isolate the distinguished object. One seeks minimal cost testing procedures where cost is the average cost of isolation, summed over all objects. This is a problem schema for the diagnosis problem: applications occur in medicine, systematic biology, machine fault location, quality control and elsewhere.",
"This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability. In the noisy Boolean decision tree model, tight bounds are given on the number of queries to input variables required to compute threshold functions, the parity function and symmetric functions. In the noisy comparison tree model, tight bounds are given on the number of noisy comparisons for searching, sorting, selection and merging. The paper also studies parallel selection and sorting with noisy comparisons, giving tight bounds for several problems.",
"",
"A binary image model is studied with a Lipschitz edge function. The indicator function of the image is observed in random noise at n design points that can be chosen sequentially. The asymptotically minimax rate as n-->[infinity] is found in estimating the edge function, and an asymptotically optimal algorithm is described.",
"",
"A communication theory approach to decision tree design based on a top-town mutual information algorithm is presented. It is shown that this algorithm is equivalent to a form of Shannon-Fano prefix coding, and several fundamental bounds relating decision-tree parameters are derived. The bounds are used in conjunction with a rate-distortion interpretation of tree design to explain several phenomena previously observed in practical decision-tree design. A termination rule for the algorithm called the delta-entropy rule is proposed that improves its robustness in the presence of noise. Simulation results are presented, showing that the tree classifiers derived by the algorithm compare favourably to the single nearest neighbour classifier. >",
"A process for continously annealing a fused cast refractory body, in order to obtain a crack-free product, in which the fused cast refractory body is placed on a suitable preheated carrier which has been previously heated to a temperature of greater than 500 DEG C in a preheat chamber, and is charged into an annealing tunnel which is divided into a hot zone and a cooling zone having at least one cooling section.",
"We introduce and study a problem that we refer to as the optimal split tree problem. The problem generalizes a number of problems including two classical tree construction problems including the Huffman tree problem and the optimal alphabetic tree. We show that the general split tree problem is NP-complete and analyze a greedy algorithm for its solution. We show that a simple modification of the greedy algorithm guarantees O(log n) approximation ratio. We construct an example for which this algorithm achieves Ω(log n log log n) approximation ratio. We show that if all weights are equal and the optimal split tree is of depth O(log n). then the greedy algorithm guarantees O(log n log log n) approximation ratio. We also extend our approximation algorithm to the construction of a search tree for partially ordered sets.",
"We present a framework for margin based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature.We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition.",
""
]
}
|
0910.4397
|
2950422405
|
This paper investigates the problem of determining a binary-valued function through a sequence of strategically selected queries. The focus is an algorithm called Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued function through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. This paper develops novel incoherence and geometric conditions under which GBS achieves the information-theoretically optimal query complexity; i.e., given a collection of N hypotheses, GBS terminates with the correct function after no more than a constant times log N queries. Furthermore, a noise-tolerant version of GBS is developed that also achieves the optimal query complexity. These results are applied to learning halfspaces, a problem arising routinely in image processing and machine learning.
|
Past work has provided a partial characterization of this problem. If the responses to queries are noiseless, then selecting the sequence of queries from @math is equivalent to determining a binary decision tree, where a sequence of queries defines a path from the root of the tree (corresponding to @math ) to a leaf (corresponding to a single element of @math ). In general the determination of the optimal (worst- or average-case) tree is NP-complete as shown by Hyafil and Rivest @cite_12 . However, there exists a greedy procedure that yields query sequences that are within a factor of @math of the optimal search tree depth; this result has been discovered independently by several researchers including Loveland @cite_41 , Garey and Graham @cite_41 , @cite_15 , and Dasgupta @cite_4 . The greedy procedure is referred to here as Generalized Binary Search (GBS) or the splitting algorithm , and it reduces to classic binary search, as discussed in .
|
{
"cite_N": [
"@cite_41",
"@cite_15",
"@cite_4",
"@cite_12"
],
"mid": [
"1975096669",
"2061143573",
"2151596305",
"1970074386"
],
"abstract": [
"Binary testing concerns finding good algorithms to solve the class of binary identification problems. A binary identification problem has as input a set of objects, including one regarded as distinguished (e.g., faulty), for each object an a priori estimate that it is the distinguished object, and a set of tests. Output is a testing procedure to isolate the distinguished object. One seeks minimal cost testing procedures where cost is the average cost of isolation, summed over all objects. This is a problem schema for the diagnosis problem: applications occur in medicine, systematic biology, machine fault location, quality control and elsewhere.",
"In machine fault-location, medical diagnosis, species identification, and computer decisionmaking, one is often required to identify some unknown object or condition, belonging to a known set of M possibilities, by applying a sequence of binary-valued tests, which are selected from a given set of available tests. One would usually prefer such a testing procedure which minimizes or nearly minimizes the expected testing cost for identification. Existing methods for determining a minimal expected cost testing procedure, however, require a number of operations which increases exponentially with M and become infeasible for solving problems of even moderate size. Thus, in practice, one instead uses fast, heuristic methods which hopefully obtain low cost testing procedures, but which do not guarantee a minimal cost solution. Examining the important case in which all M possibilities are equally likely, we derive a number of cost-bounding results for the most common heuristic procedure, which always applies next that test yielding maximum information gain per unit cost. In particular, we show that solutions obtained using this method can have expected cost greater than an arbitrary multiple of the optimal expected cost.",
"We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.",
""
]
}
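The related-work paragraph of this record describes the greedy splitting procedure: at each step, pick the query that most evenly splits the surviving hypotheses and discard those inconsistent with the observed response. The sketch below is a minimal rendering of that idea under the assumptions of a finite query set, distinct hypotheses and a noiseless oracle; the threshold-function example is an arbitrary illustration of how the procedure reduces to classic binary search.

```python
def generalized_binary_search(hypotheses, queries, oracle):
    """Greedy splitting: ask the query whose +1/-1 split of the surviving hypotheses
    is most balanced, then prune the hypotheses inconsistent with the response."""
    viable = list(hypotheses)
    while len(viable) > 1:
        # Choose the query minimizing |#{h: h(q)=+1} - #{h: h(q)=-1}| over viable hypotheses.
        q = min(queries, key=lambda q: abs(sum(h[q] for h in viable)))
        answer = oracle(q)
        viable = [h for h in viable if h[q] == answer]
    return viable[0]

# Toy example: hypotheses are threshold functions on the points 0..7 (halfspaces on a line).
points = list(range(8))
hypotheses = [{x: (1 if x >= t else -1) for x in points} for t in range(9)]
truth = hypotheses[5]
found = generalized_binary_search(hypotheses, points, oracle=lambda q: truth[q])
assert found == truth
```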
|
0910.3119
|
2950432800
|
In this paper, we propose a structured peer-to-peer (P2P) distribution scheme based on Fast Fourier Transform (FFT) graphs. We build a peer-to-peer network that reproduces the FFT graph initially designed for hardware FFT codecs. This topology allows content delivery with a maximum diversity level for a minimum global complexity. The resulting FFT-based network is a structured architecture with an adapted network coding that brings flexibility in content distribution and robustness against the dynamic nature of the network. This structure can achieve optimal capacity in terms of content recovery while solving the problem of last remaining blocks, even for large networks.
|
Structured and hierarchical topologies can also be designed for content delivery. The network can be built with a single tree-based approach @cite_11 or with a more sophisticated multi-tree-based approach @cite_13 @cite_2 . In this case, in any branch of the tree, a node, usually the fastest one, is designated to be the head of the subset and represents the branch to interact with the upper layers. This node is also responsible for delivering the content from the upper layers to the other nodes of the subset. However, this scheme suffers from the same problem as the replication approach in a P2P setting: this node can leave at any time, leaving the other nodes and the lower layers orphaned. In order to have more reliable connections, multi-head schemes have also been investigated @cite_9 , with delivery and representation to upper layers separated. Nevertheless, this solution remains more complex to build.
|
{
"cite_N": [
"@cite_2",
"@cite_9",
"@cite_13",
"@cite_11"
],
"mid": [
"2127494222",
"2135593270",
"1486751828",
"2129807746"
],
"abstract": [
"In tree-based multicast systems, a relatively small number of interior nodes carry the load of forwarding multicast messages. This works well when the interior nodes are highly-available, dedicated infrastructure routers but it poses a problem for application-level multicast in peer-to-peer systems. SplitStream addresses this problem by striping the content across a forest of interior-node-disjoint multicast trees that distributes the forwarding load among all participating peers. For example, it is possible to construct efficient SplitStream forests in which each peer contributes only as much forwarding bandwidth as it receives. Furthermore, with appropriate content encodings, SplitStream is highly robust to failures because a node failure causes the loss of a single stripe on average. We present the design and implementation of SplitStream and show experimental results obtained on an Internet testbed and via large-scale network simulation. The results show that SplitStream distributes the forwarding load among all peers and can accommodate peers with different bandwidth capacities while imposing low overhead for forest construction and maintenance.",
"Given that the Internet does not widely support Internet protocol multicast while content-distribution-network technologies are costly, the concept of peer-to-peer could be a promising start for enabling large-scale streaming systems. In our so-called Zigzag approach, we propose a method for clustering peers into a hierarchy called the administrative organization for easy management, and a method for building the multicast tree atop this hierarchy for efficient content transmission. In Zigzag, the multicast tree has a height logarithmic with the number of clients, and a node degree bounded by a constant. This helps reduce the number of processing hops on the delivery path to a client while avoiding network bottlenecks. Consequently, the end-to-end delay is kept small. Although one could build a tree satisfying such properties easily, an efficient control protocol between the nodes must be in place to maintain the tree under the effects of network dynamics. Zigzag handles such situations gracefully, requiring a constant amortized worst-case control overhead. Especially, failure recovery is done regionally with impact on, at most, a constant number of existing clients and with mostly no burden on the server.",
"Currently, the only way to disseminate streaming media to many users is to pay for lots of bandwidth. A more democratic alternative would be for interested users to donate bandwidth to help disseminate the data further. In this paper we discuss the design of P2PCast, a completely decentralized, scalable, fault-tolerant self-organizing system aimed at being able to stream content to thousands of nodes from behind a relatively low-bandwidth network. Our system leverages the full bandwidth that has been committed by its users by striping the data which also enhances fault-tolerance. We propose a novel algorithm for managing these stripes as a forest of multicast trees in a systematic fashion under stress conditions. Finally, we discuss a prototype implementation of our system using libasync and sketch some preliminary results.",
"We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties.We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25 ), improved or similar end-to-end latencies and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic.Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1 as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100."
]
}
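To make the FFT-based topology of this record's abstract concrete, the sketch below constructs the stage-to-stage links of a radix-2 butterfly graph, in which a node at stage s+1 receives from the node with the same index and from the node whose index differs in bit s. This is only one conventional way to wire a butterfly; it says nothing about the authors' block scheduling or network coding.

```python
def butterfly_edges(log_n):
    """Directed links of a radix-2 butterfly: 2**log_n nodes per stage, log_n stages of links."""
    n = 1 << log_n
    edges = []
    for stage in range(log_n):
        for i in range(n):
            dst = (stage + 1, i)
            edges.append(((stage, i), dst))                 # straight link
            edges.append(((stage, i ^ (1 << stage)), dst))  # cross link: bit `stage` flipped
    return edges

edges = butterfly_edges(3)        # 8 nodes per stage, 3 stages of links
assert len(edges) == 2 * 8 * 3    # every node receives exactly two blocks per stage
```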
|
0910.1879
|
2166163936
|
We present novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler and more general than previous approaches. It is shown that an unknown matrix of rank can be efficiently reconstructed from only randomly sampled expansion coefficients with respect to any given matrix basis. The number quantifies the “degree of incoherence” between the unknown matrix and the basis. Existing work concentrated mostly on the problem of “matrix completion” where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates are slightly tighter. We discuss operator bases which are incoherent to all low-rank matrices simultaneously. For these bases, we show that randomly sampled expansion coefficients suffice to recover any low-rank matrix with high probability. The latter bound is tight up to multiplicative constants.
|
We first published these results in @cite_29 , a short paper written with a physics audience in mind. This pre-print contains all the main ideas of the current work, and a complete proof of Theorem for Fourier-type bases (the case of interest in quantum tomography). We announced in @cite_29 that a more detailed exposition of the new method, applying to the general low-rank matrix recovery problem with respect to arbitrary bases, was in preparation.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"1529624360"
],
"abstract": [
"We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rdlog^2d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed."
]
}
|
0910.1879
|
2166163936
|
We present novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler and more general than previous approaches. It is shown that an unknown matrix of rank can be efficiently reconstructed from only randomly sampled expansion coefficients with respect to any given matrix basis. The number quantifies the “degree of incoherence” between the unknown matrix and the basis. Existing work concentrated mostly on the problem of “matrix completion” where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates are slightly tighter. We discuss operator bases which are incoherent to all low-rank matrices simultaneously. For these bases, we show that randomly sampled expansion coefficients suffice to recover any low-rank matrix with high probability. The latter bound is tight up to multiplicative constants.
|
Before this extended version of @cite_29 had been completed, another pre-print @cite_7 building on @cite_29 appeared. The author of @cite_7 presents our methods in a language more suitable for an audience from mathematics or information theory. He also presents another special case of the results announced in @cite_29 : the reconstruction of low-rank matrices from randomly sampled matrix elements. The main proof techniques in @cite_7 are identical to those of @cite_29 , with two exceptions. First, the author independently found the same modification we are using here to extend the methods from Fourier-type matrices to bases with larger operator norm (his Lemma 3.6, our Lemma ). Second, his proof works more directly with non-Hermitian matrices, and gives tighter bounds in the case of non-square matrices.
|
{
"cite_N": [
"@cite_29",
"@cite_7"
],
"mid": [
"1529624360",
"2951810887"
],
"abstract": [
"We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rdlog^2d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed.",
"This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory."
]
}
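The matrix completion special case discussed in this record (recovering a low-rank matrix from randomly sampled entries) is commonly attacked through nuclear-norm minimization. The sketch below implements singular value thresholding (SVT), a standard iterative solver for that relaxation; it illustrates the recovery problem rather than anything in this paper's proofs, and the threshold, step size and iteration count are rough, untuned choices.

```python
import numpy as np

def svt_complete(observed, mask, tau, step, iters=500):
    """Singular value thresholding for matrix completion.
    observed: matrix whose entries outside `mask` are ignored; mask: boolean array of sampled positions."""
    Y = np.zeros_like(observed, dtype=float)
    M = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        M = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values by tau
        Y = Y + step * mask * (observed - M)      # ascent step on the observed entries only
    return M

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))   # rank-2 ground truth
mask = rng.random(X.shape) < 0.6                                  # observe ~60% of entries
X_hat = svt_complete(X * mask, mask, tau=5 * 30, step=1.2 / 0.6)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))              # relative recovery error
```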
|
0910.2042
|
2951767789
|
Consider the standard linear regression model @math , where @math is an observation vector, @math is a design matrix, @math is the unknown regression vector, and @math is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of @math for @math -losses and in the @math -prediction loss, assuming that @math belongs to an @math -ball @math for some @math . We show that under suitable regularity conditions on the design matrix @math , the minimax error in @math -loss and @math -prediction loss scales as @math . In addition, we provide lower bounds on minimax risks in @math -norms, for all @math . Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls @math , whereas our proofs of the upper bounds are direct and constructive, involving direct analysis of least-squares over @math -balls. For the special case @math , a comparison with @math -risks achieved by computationally efficient @math -relaxations reveals that although such methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix @math than algorithms involving least-squares over the @math -ball.
|
Naturally, our work also has some connections to the vast body of work on @math -based methods for sparse estimation, particularly for the case of hard sparsity ( @math ). Based on our results, the rates that are achieved by @math -methods, such as the Lasso and the Dantzig selector, are minimax optimal for @math -loss, but require somewhat stronger conditions on the design matrix than an "optimal" algorithm, which is based on searching the @math -ball. We compare the conditions that we impose in our minimax analysis to various conditions imposed in the analysis of @math -based methods, including the restricted isometry property of Candes and Tao @cite_26 , the restricted eigenvalue condition imposed in Meinshausen and Yu @cite_9 , the partial Riesz condition in Zhang and Huang @cite_24 , and the restricted eigenvalue condition of @cite_28 . We find that "optimal" methods, which are based on minimizing least-squares directly over the @math -ball, can succeed for design matrices where @math -based methods are not known to work.
|
{
"cite_N": [
"@cite_24",
"@cite_28",
"@cite_9",
"@cite_26"
],
"mid": [
"2147631685",
"2116581043",
"",
"2034260606"
],
"abstract": [
"Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436-1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [(2006) J. Machine Learning Research 7 2541-2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the regression coefficients outside an ideal model are assumed to be small, but not necessarily zero. Under a sparse Riesz condition on the correlation of design variables, we prove that the LASSO selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias of the selected model. Moreover, as a consequence of this rate consistency of the LASSO in model selection, it is proved that the sum of error squares for the mean response and the l α -loss for the regression coefficients converge at the best possible rates under the given conditions. An interesting aspect of our results is that the logarithm of the number of variables can be of the same order as the sample size for certain random dependent designs.",
"We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the l p estimation loss for 1 ≤ p ≤ 2 in the linear model when the number of variables can be much larger than the sample size.",
"",
"In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y=Xβ+z, where β∈Rp is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n≪p, and the zi’s are i.i.d. N(0, σ^2). Is it possible to estimate β reliably based on the noisy data y?"
]
}
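For a concrete instance of the ℓ1-based methods compared in this record, the sketch below solves a Lasso problem on a synthetic hard-sparse design using plain iterative soft thresholding (ISTA). The design matrix, sparsity level and regularization weight are arbitrary choices, and nothing here reproduces the paper's minimax analysis; it only shows the kind of estimator being compared against least squares over ℓq-balls.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, iters=2000):
    """Minimize 0.5 * ||y - X b||_2^2 + lam * ||b||_1 by iterative soft thresholding."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the smooth part's gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = soft_threshold(b - X.T @ (X @ b - y) / L, lam / L)
    return b

rng = np.random.default_rng(1)
n, d, s = 100, 200, 5                            # samples, dimension, number of nonzeros (q = 0 case)
X = rng.standard_normal((n, d)) / np.sqrt(n)     # Gaussian design with roughly unit-norm columns
beta = np.zeros(d)
beta[:s] = 1.0
y = X @ beta + 0.01 * rng.standard_normal(n)
lam = 0.1 * np.sqrt(np.log(d) / n)               # on the order of the usual sqrt(log d / n) choice
beta_hat = lasso_ista(X, y, lam)
print(np.linalg.norm(beta_hat - beta))           # l2 estimation error
```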
|
0910.2655
|
1581793925
|
In this note we consider the following problem to study the effect of malicious players on the social optimum in load balancing games: Consider two players SOC and MAL controlling (1-f) and f fraction of the flow in a load balancing game. SOC tries to minimize the total cost faced by her players while MAL tries to maximize the same. If the latencies are linear, we show that this 2-player zero-sum game has a pure strategy Nash equilibrium. Moreover, we show that one of the optimal strategies for MAL is to play selfishly: let the f fraction of the flow be sent as when the flow was controlled by infinitesimal players playing selfishly and reaching a Nash equilibrium. This shows that a malicious player cannot cause more harm in this game than a set of selfish agents. We also introduce the notion of Cost of Malice - the ratio of the cost faced by SOC at equilibrium to (1-f)OPT, where OPT is the social optimum minimizing the cost of all the players. In linear load balancing games we bound the cost of malice by (1+f 2).
|
In the computer science community, there have been a number of studies investigating the effect of malicious agents in congestion games. The first work closest to our setting was by Karakostas and Viglas @cite_7 , who study how the presence of malicious agents in a congestion game affects the price of anarchy. That is, how the ratio between the cost of agents in a Nash equilibrium and the social optimum, which the authors call the coordination ratio, changes with the amount of malice in the system. More recently, Babaioff @cite_1 consider the malicious agent in the game along with the infinitesimal players and show that for general latency functions, the game need not have a pure Nash equilibrium, and they show the existence of a mixed Nash equilibrium. Moreover, they define the Price of Malice as the rate of deterioration in the total performance of the remaining selfish players per unit flow of malice and show that there exist networks where the price of malice may indeed be negative, while on the other hand there are networks where the price could be quite large.
|
{
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2124034793",
"2075488114"
],
"abstract": [
"We study the equilibria of non-atomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counter intuitive phenomenon which we demonstrate is the \"windfall of malice\": paradoxically, when a myopically malicious player gains control of a fraction of the flow, a fraction of the players change from rational to malicious, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.",
"We consider the problem of characterizing user equilibria and optimal solutions for selfish routing in a given network. We extend the known models by considering malicious behavior. While selfish users follow a strategy that minimizes their individual cost, a malicious user will use his flow through the network in an effort to cause the maximum possible damage to the overall cost. We define a generalized model, present characterizations of flows at equilibrium and prove bounds for the ratio of the social cost of a flow at equilibrium over the cost when centralized coordination among users is allowed."
]
}
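To make the quantities in this record's abstract concrete, the toy computation below takes a two-machine instance with linear latencies, fixes MAL's f fraction according to one reading of "play selfishly" (the Wardrop split of the full unit flow, scaled by f), lets SOC best-respond by grid search over its own split, and reports the ratio of SOC's cost to (1-f) times the social optimum. The instance and grid resolution are arbitrary, and this is an illustration of the definitions rather than of the paper's equilibrium analysis.

```python
import numpy as np

def latency(a, b, load):
    return a * load + b

def cost_of_malice(a=(1.0, 2.0), b=(0.0, 0.5), f=0.3, grid=100001):
    a1, a2 = a
    b1, b2 = b
    # Wardrop (all-selfish) split of the full unit flow on machine 1, clamped to [0, 1].
    x_eq = float(np.clip((a2 + b2 - b1) / (a1 + a2), 0.0, 1.0))
    m1, m2 = f * x_eq, f * (1.0 - x_eq)           # MAL "plays selfishly" (one reading)

    # SOC best-responds with its (1 - f) fraction: grid search over its load s on machine 1.
    s = np.linspace(0.0, 1.0 - f, grid)
    soc = (latency(a1, b1, s + m1) * s
           + latency(a2, b2, (1.0 - f - s) + m2) * (1.0 - f - s)).min()

    # Social optimum for the whole unit flow with no malicious player.
    t = np.linspace(0.0, 1.0, grid)
    opt = (latency(a1, b1, t) * t + latency(a2, b2, 1.0 - t) * (1.0 - t)).min()

    return soc / ((1.0 - f) * opt)

print(cost_of_malice())
```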
|
0910.2655
|
1581793925
|
In this note we consider the following problem to study the effect of malicious players on the social optimum in load balancing games: Consider two players SOC and MAL controlling (1-f) and f fraction of the flow in a load balancing game. SOC tries to minimize the total cost faced by her players while MAL tries to maximize the same. If the latencies are linear, we show that this 2-player zero-sum game has a pure strategy Nash equilibrium. Moreover, we show that one of the optimal strategies for MAL is to play selfishly: let the f fraction of the flow be sent as when the flow was controlled by infinitesimal players playing selfishly and reaching a Nash equilibrium. This shows that a malicious player cannot cause more harm in this game than a set of selfish agents. We also introduce the notion of Cost of Malice - the ratio of the cost faced by SOC at equilibrium to (1-f)OPT, where OPT is the social optimum minimizing the cost of all the players. In linear load balancing games we bound the cost of malice by (1+f 2).
|
As we note in the introduction, the focus of the works above was on the effect of malice on the equilibrium of the system: how the equilibrium degrades and how the game between the selfish agents and the malicious agent takes place. On the other hand, we are more interested in how the presence of malice affects the social optimum itself. Thus our work complements the works of Karakostas and Viglas @cite_7 and Babaioff @cite_1 .
|
{
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2124034793",
"2075488114"
],
"abstract": [
"We study the equilibria of non-atomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counter intuitive phenomenon which we demonstrate is the \"windfall of malice\": paradoxically, when a myopically malicious player gains control of a fraction of the flow, a fraction of the players change from rational to malicious, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.",
"We consider the problem of characterizing user equilibria and optimal solutions for selfish routing in a given network. We extend the known models by considering malicious behavior. While selfish users follow a strategy that minimizes their individual cost, a malicious user will use his flow through the network in an effort to cause the maximum possible damage to the overall cost. We define a generalized model, present characterizations of flows at equilibrium and prove bounds for the ratio of the social cost of a flow at equilibrium over the cost when centralized coordination among users is allowed."
]
}
|
0910.2655
|
1581793925
|
In this note we consider the following problem to study the effect of malicious players on the social optimum in load balancing games: Consider two players SOC and MAL controlling (1-f) and f fraction of the flow in a load balancing game. SOC tries to minimize the total cost faced by her players while MAL tries to maximize the same. If the latencies are linear, we show that this 2-player zero-sum game has a pure strategy Nash equilibrium. Moreover, we show that one of the optimal strategies for MAL is to play selfishly: let the f fraction of the flow be sent as when the flow was controlled by infinitesimal players playing selfishly and reaching a Nash equilibrium. This shows that a malicious player cannot cause more harm in this game than a set of selfish agents. We also introduce the notion of Cost of Malice - the ratio of the cost faced by SOC at equilibrium to (1-f)OPT, where OPT is the social optimum minimizing the cost of all the players. In linear load balancing games we bound the cost of malice by (1+f 2).
|
The study of malice has not been restricted to the congestion game setting. Moscibroda et al. @cite_2 study the effect of malicious agents in a virus inoculation game. They too define a notion of price of malice, namely the ratio of the cost of an equilibrium with malicious agents to the cost of one without them. This definition is not quite the same as that of Babaioff et al. @cite_1 . In fact, to avoid a third "price of malice" definition, we call the effect of malice on the optimum the "cost of malice" instead.
|
{
"cite_N": [
"@cite_1",
"@cite_2"
],
"mid": [
"2124034793",
"2140356130"
],
"abstract": [
"We study the equilibria of non-atomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counter intuitive phenomenon which we demonstrate is the \"windfall of malice\": paradoxically, when a myopically malicious player gains control of a fraction of the flow, a fraction of the players change from rational to malicious, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.",
"Over the last years, game theory has provided great insights into the behavior of distributed systems by modeling the players as utility-maximizing agents. In particular, it has been shown that selfishness causes many systems to perform in a globally suboptimal fashion. Such systems are said to have a large Price of Anarchy. In this paper, we extend this active field of research by allowing some players to be malicious or Byzantine rather than selfish. We ask: What is the impact of Byzantine players on the system's efficiency compared to purely selfish environments or compared to the social optimum? In particular, we introduce the Price of Malice which captures this efficiency degradation. As an example, we analyze the Price of Malice of a game which models the containment of the spread of viruses. In this game, each node can choose whether or not to install anti-virus software. Then, a virus starts from a random node and iteratively infects all neighboring nodes which are not inoculated. We establish various results about this game. For instance, we quantify how much the presence of Byzantine players can deteriorate or---in case of highly risk-averse selfish players---improve the social welfare of the distributed system."
]
}
|
0910.2655
|
1581793925
|
In this note we consider the following problem to study the effect of malicious players on the social optimum in load balancing games: Consider two players SOC and MAL controlling (1-f) and f fraction of the flow in a load balancing game. SOC tries to minimize the total cost faced by her players while MAL tries to maximize the same. If the latencies are linear, we show that this 2-player zero-sum game has a pure strategy Nash equilibrium. Moreover, we show that one of the optimal strategies for MAL is to play selfishly: let the f fraction of the flow be sent as when the flow was controlled by infinitesimal players playing selfishly and reaching a Nash equilibrium. This shows that a malicious player cannot cause more harm in this game than a set of selfish agents. We also introduce the notion of Cost of Malice - the ratio of the cost faced by SOC at equilibrium to (1-f)OPT, where OPT is the social optimum minimizing the cost of all the players. In linear load balancing games we bound the cost of malice by (1+f/2).
|
Our work is in some sense similar to the work on Stackelberg strategies started by Roughgarden @cite_4 and continued by Karakostas and Kolliopoulos @cite_5 , Swamy @cite_6 , and Sharma and Williamson @cite_3 . In a Stackelberg game there is a leader who controls some amount of flow, which she routes first; the remaining selfish players then respond to this routing. It is instructive to compare this leader with our SOC player. In fact, our proof bounding the cost of malice goes via the strategy SCALE from the paper of Roughgarden @cite_4 .
|
{
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_4",
"@cite_6"
],
"mid": [
"2039786295",
"2143110141",
"2012552037",
""
],
"abstract": [
"A natural generalization of the selfish routing setting arises when some of the users obey a central coordinating authority, while the rest act selfishly. Such behavior can be modeled by dividing the users into an α fraction that are routed according to the central coordinator’s routing strategy (Stackelberg strategy), and the remaining 1−α that determine their strategy selfishly, given the routing of the coordinated users. One seeks to quantify the resulting price of anarchy, i.e., the ratio of the cost of the worst traffic equilibrium to the system optimum, as a function of α. It is well-known that for α=0 and linear latency functions the price of anarchy is at most 4 3 (J. ACM 49, 236–259, 2002). If α tends to 1, the price of anarchy should also tend to 1 for any reasonable algorithm used by the coordinator. We analyze two such algorithms for Stackelberg routing, LLF and SCALE. For general topology networks, multicommodity users, and linear latency functions, we show a price of anarchy bound for SCALE which decreases from 4 3 to 1 as α increases from 0 to 1, and depends only on α. Up to this work, such a tradeoff was known only for the case of two nodes connected with parallel links (SIAM J. Comput. 33, 332–350, 2004), while for general networks it was not clear whether such a result could be achieved, even in the single-commodity case. We show a weaker bound for LLF and also some extensions to general latency functions. The existence of a central coordinator is a rather strong requirement for a network. We show that we can do away with such a coordinator, as long as we are allowed to impose taxes (tolls) on the edges in order to steer the selfish users towards an improved system cost. As long as there is at least a fraction α of users that pay their taxes, we show the existence of taxes that lead to the simulation of SCALE by the tax-payers. The extension of the results mentioned above quantifies the improvement on the system cost as the number of tax-evaders decreases.",
"Noncooperative network routing games are a natural model of userstrying to selfishly route flow through a network in order to minimize their own delays. It is well known that the solution resulting from this selfish routing (called the Nash equilibrium) can have social cost strictly higher than the cost of the optimum solution. One way to improve the quality of the resulting solution is to centrally control a fraction of the flow. A natural problem for the network administrator then is to route the centrally controlled flow in such a way that the overall cost of the solution is minimized after the remaining fraction has routed itself selfishl. This problem falls in the class of well-studied Stackelberg routing games. We consider the scenario where the network administrator wants the final solution to be (strictly) better than the Nash equilibrium. In other words, she wants to control enough flow such that the cost of the resulting solution is strictly less than the cost of the Nash equilibrium. We call the minimum fraction of users that must be centrally routed to improve the quality of the resulting solution the Stackelberg threshold. We give a closed form expression for the Stackelberg threshold for parallel links networks with linear latency functions. The expression is in terms of Nash equilibrium flows and optimum flows. It turns out that the Stackelberg threshold is the minimum of Nash flows on links which have more optimum flow than Nash flow. Using our approach to characterize the Stackelberg thresholds, we are able to give a simpler proof of an earlier result which finds the minimum fraction required to be centrally controlled to induce an optimum solution.",
"We study the problem of optimizing the performance of a system shared by selfish, noncooperative users. We consider the concrete setting of scheduling jobs on a set of shared machines with load-dependent latency functions specifying the length of time necessary to complete a job; we measure system performance by the total latency of the system. Assigning jobs according to the selfish interests of individual users (who wish to minimize only the latency that their own jobs experience) typically results in suboptimal system performance. However, in many systems of this type there is a mixture of “selfishly controlled” and “centrally controlled” jobs; as the assignment of centrally controlled jobs will influence the subsequent actions by selfish users, we aspire to contain the degradation in system performance due to selfish behavior by scheduling the centrally controlled jobs in the best possible way. We formulate this goal as an optimization problem via Stackelberg games , games in which one player acts a leader (here, the centralized authority interested in optimizing system performance) and the rest as followers (the selfish users). The problem is then to compute a strategy for the leader (a em Stackelberg strategy ) that induces the followers to react in a way that (at least approximately) minimizes the total latency in the system. In this paper, we prove that it is NP-hard to compute the optimal Stackelberg strategy and present simple strategies with provable performance guarantees. More precisely, we give a simple algorithm that computes a strategy inducing a job assignment with total latency no more than a constant times that of the optimal assignment of all of the jobs; in the absence of centrally controlled jobs and a Stackelberg strategy, no result of this type is possible. We also prove stronger performance guarantees in the",
""
]
}
|
0910.0895
|
2953379763
|
We consider the problem of recovering a function over the space of permutations (or, the symmetric group) over @math elements from given partial information; the partial information we consider is related to the group theoretic Fourier Transform of the function. This problem naturally arises in several settings such as ranked elections, multi-object tracking, ranking systems, and recommendation systems. Inspired by the work of Donoho and Stark in the context of discrete-time functions, we focus on non-negative functions with a sparse support (support size @math domain size). Our recovery method is based on finding the sparsest solution (through @math optimization) that is consistent with the available information. As the main result, we derive sufficient conditions for functions that can be recovered exactly from partial information through @math optimization. Under a natural random model for the generation of functions, we quantify the recoverability conditions by deriving bounds on the sparsity (support size) for which the function satisfies the sufficient conditions with a high probability as @math . @math optimization is computationally hard. Therefore, the popular compressive sensing literature considers solving the convex relaxation, @math optimization, to find the sparsest solution. However, we show that @math optimization fails to recover a function (even with constant sparsity) generated using the random model with a high probability as @math . In order to overcome this problem, we propose a novel iterative algorithm for the recovery of functions that satisfy the sufficient conditions. Finally, using an Information Theoretic framework, we study necessary conditions for exact recovery to be possible.
|
The sparsest-recovery approach of this paper is similar in flavor to the work stated above; in fact, as we show subsequently, the partial information we consider can be written as a linear transform of the function @math . However, the methods of the prior work do not apply. Specifically, that work considers finding the sparsest function consistent with the given partial information by solving the corresponding @math relaxation problem. It derives a necessary and sufficient condition, called the Restricted Nullspace Property, on the structure of the matrix @math that guarantees that the solutions to the @math problem and its @math relaxation coincide (see @cite_18 @cite_23 ). However, such sufficient conditions trivially fail in our setup (see @cite_8 ). Therefore, our work provides an alternate set of conditions that guarantees efficient recovery of the sparsest function.
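For intuition, here is a minimal sketch in Python (using cvxpy) of the generic l1-relaxation route described above. It is our own toy example rather than the method of this paper or of the cited works, and the measurement matrix, dimensions, and sparsity level are made up.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n_meas, dim, k = 40, 120, 5
A = rng.standard_normal((n_meas, dim))                      # stand-in for the linear partial-information operator
x_true = np.zeros(dim)
x_true[rng.choice(dim, k, replace=False)] = rng.random(k)   # sparse, non-negative ground truth
b = A @ x_true                                              # observed partial information

x = cp.Variable(dim, nonneg=True)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()  # l1 surrogate for the sparsest-solution objective
print("recovered exactly:", np.allclose(x.value, x_true, atol=1e-4))

For a generic Gaussian matrix and a small enough support this relaxation typically succeeds; the point of the paragraph above is that the analogous guarantees fail for the structured transforms arising from partial information over permutations, which is why an alternate set of recovery conditions is needed.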
|
{
"cite_N": [
"@cite_18",
"@cite_23",
"@cite_8"
],
"mid": [
"2164452299",
"",
"2570779488"
],
"abstract": [
"Suppose we wish to recover a vector x0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m",
"",
"Motivated by applications like elections, web-page ranking, revenue maximization etc., we consider the question of inferring popular rankings using constrained data. More specifically, we consider the problem of inferring a probability distribution over the group of permutations using its first order marginals. We first prove that it is not possible to recover more than O(n) permutations over n elements with the given information. We then provide a simple and novel algorithm that can recover up to O(n) permutations under a natural stochastic model; in this sense, the algorithm is optimal. In certain applications, the interest is in recovering only the most popular (or mode) ranking. As a second result, we provide an algorithm based on the Fourier Transform over the symmetric group to recover the mode under a natural majority condition; the algorithm turns out to be a maximum weight matching on an appropriately defined weighted bipartite graph. The questions considered are also thematically related to Fourier Transforms over the symmetric group and the currently popular topic of compressed sensing."
]
}
|
0910.0921
|
2951398855
|
We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately.
|
On the theoretical side, most recent work focuses on algorithms for exactly recovering the unknown low-rank matrix, providing an upper bound on the number of observed entries that guarantees successful recovery with high probability when the observed set is drawn uniformly at random over all subsets of the same size. The main assumptions are that the matrix @math to be recovered has rank @math and that the observed entries are known exactly. Adopting techniques from compressed sensing, Candès and Recht introduced a convex relaxation of the NP-hard problem of finding a minimum-rank matrix matching the observed entries @cite_7 . They introduced the concept of the incoherence property and proved that, for a rank-@math matrix @math with this property, solving the convex relaxation correctly recovers the unknown matrix with high probability if the number of observed entries @math satisfies @math .
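As a concrete illustration of this convex relaxation, here is a toy sketch in Python with cvxpy; it is our own example, not the code of @cite_7 , and the dimensions, rank, and sampling rate are arbitrary.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
mask = rng.random((n, n)) < 0.5                                  # entries observed uniformly at random
obs = np.argwhere(mask)

X = cp.Variable((n, n))
constraints = [X[i, j] == M[i, j] for i, j in obs]               # agree with every observed entry
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()      # nuclear norm as the convex proxy for rank
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))

Minimizing the nuclear norm (the sum of singular values) subject to agreement with the observed entries is the convex program referred to above.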
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2949959192"
],
"abstract": [
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information."
]
}
|
0910.0921
|
2951398855
|
We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately.
|
Recently, @cite_6 improved the bound to @math under the extra condition that the matrix has a bounded condition number, where the condition number is defined as the ratio between the largest and the smallest singular value of @math . In that work, we introduced an efficient algorithm called OptSpace, based on spectral methods followed by local manifold optimization. For bounded rank @math , the performance bound of OptSpace is order optimal @cite_6 . Candès and Tao proved a similar bound @math under a stronger assumption on the original matrix @math , known as the strong incoherence property, but without any assumption on its condition number @cite_17 . For any value of @math , this bound is suboptimal only by a poly-logarithmic factor.
|
{
"cite_N": [
"@cite_6",
"@cite_17"
],
"mid": [
"2949834189",
"2949947345"
],
"abstract": [
"Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn |E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.",
"This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n)."
]
}
|
0910.0921
|
2951398855
|
We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately.
|
While most theoretical work focuses on proving bounds for the exact matrix completion problem, a more interesting and practical setting is when the matrix @math is only approximately low rank or when the observations are corrupted by noise. The main focus in this setting is to design an algorithm that finds a rank-@math matrix @math that best approximates the original matrix @math , and to provide a bound on the resulting root mean squared error (RMSE). Candès and Plan introduced a generalization of the convex relaxation from @cite_7 to the noisy case and provided a bound on the RMSE @cite_5 . More recently, a bound on the RMSE achieved by OptSpace with noisy observations was obtained in @cite_12 . This bound is order optimal in a number of situations and improves over the analogous result in @cite_5 . A detailed comparison of these two results is provided in Section .
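For completeness, the RMSE referred to above is presumably the standard normalized Frobenius error; this is an assumption on our part, since the source omits the formula. For an m x n matrix M and an estimate \widehat{M}:

\mathrm{RMSE} \;=\; \frac{1}{\sqrt{mn}} \, \bigl\| M - \widehat{M} \bigr\|_F ,
\qquad \text{where } \| \cdot \|_F \text{ denotes the Frobenius norm.}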
|
{
"cite_N": [
"@cite_5",
"@cite_12",
"@cite_7"
],
"mid": [
"2952716509",
"2952066970",
"2949959192"
],
"abstract": [
"On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.",
"Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by (2009), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.",
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information."
]
}
|
0910.0921
|
2951398855
|
We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately.
|
On the practical side, directly solving the convex relaxation introduced in @cite_7 requires solving a semidefinite program (SDP), whose complexity grows proportionally to @math . Recently, many authors have proposed efficient algorithms for solving the low-rank matrix completion problem. These include the Accelerated Proximal Gradient (APG) algorithm @cite_9 , Fixed Point Continuation with Approximate SVD (FPCA) @cite_0 , Atomic Decomposition for Minimum Rank Approximation (ADMiRA) @cite_13 , Soft-Impute @cite_4 , Subspace Evolution and Transfer (SET) @cite_16 , Singular Value Projection (SVP) @cite_3 , and OptSpace @cite_6 . In this paper, we provide numerical comparisons of the performance of three state-of-the-art algorithms, namely OptSpace, ADMiRA, and FPCA, and show that these efficient algorithms can be used to accurately reconstruct real data matrices as well as randomly generated ones.
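To convey why such methods are far cheaper than the SDP, here is a bare-bones Python iteration in the spirit of singular value projection / hard-impute; it is our own simplification, not the implementation of any of the cited algorithms, and it assumes the target rank r is known.

import numpy as np

def complete_by_svd_truncation(M_obs, mask, r, iters=100):
    """M_obs holds observed entries (zeros elsewhere); mask is a boolean array of the same shape."""
    X = M_obs.copy()
    for _ in range(iters):
        X[mask] = M_obs[mask]                           # re-impose the observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r, :]              # project back onto rank-r matrices
    return X

Each step costs one truncated SVD rather than a semidefinite solve, which is what makes this family of methods usable at the problem sizes reported by the papers above.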
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2949959192",
"2339666411",
"2951927428",
"2949834189",
"2949351217",
"2949934932",
"2950176463"
],
"abstract": [
"",
"We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^ 1.2 r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.",
"The a‐ne rank minimization problem, which consists of flnding a matrix of minimum rank subject to linear equality constraints, has been proposed in many areas of engineering and science. A speciflc rank minimization problem is the matrix completion problem, in which we wish to recover a (low-rank) data matrix from incomplete samples of its entries. A recent convex relaxation of the rank minimization problem minimizes the nuclear norm instead of the rank of the matrix. Another possible model for the rank minimization problem is the nuclear norm regularized linear least squares problem. This regularized problem is a special case of an unconstrained nonsmooth convex optimization problem, in which the objective function is the sum of a convex smooth function with Lipschitz continuous gradient and a convex function on a set of matrices. In this paper, we propose an accelerated proximal gradient algorithm, which terminates in O(1= p †) iterations with an †-optimal solution, to solve this unconstrained nonsmooth convex optimization problem, and in particular, the nuclear norm regularized linear least squares problem. We report numerical results for solving large-scale randomly generated matrix completion problems. The numerical results suggest that our algorithm is e‐cient and robust in solving large-scale random matrix completion problems. In particular, we are able to solve random matrix completion problems with matrix dimensions up to 10 5 each in less than 10 minutes on a modest PC.",
"Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization with affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy the \"restricted isometry property\" and show robustness of our method to noise. Our results improve upon a recent breakthrough by Recht, Fazel and Parillo (RFP07) and Lee and Bresler (LB09) in three significant ways: 1) our method (SVP) is significantly simpler to analyze and easier to implement, 2) we give recovery guarantees under strictly weaker isometry assumptions 3) we give geometric convergence guarantees for SVP even in presense of noise and, as demonstrated empirically, SVP is significantly faster on real-world and synthetic problems. In addition, we address the practically important problem of low-rank matrix completion (MCP), which can be seen as a special case of ARMP. We empirically demonstrate that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We make partial progress towards proving exact recovery and provide some intuition for the strong performance of SVP applied to matrix completion by showing a more restricted isometry property. Our algorithm outperforms existing methods, such as those of RFP07,CR08,CT09,CCS08,KOM09,LB09 , for ARMP and the matrix-completion problem by an order of magnitude and is also significantly more robust to noise.",
"Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn |E|)^0.5 . Further, if r=O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.",
"The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 x 1000 matrices of rank 50 with a relative error of 1e-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.",
"A new algorithm, termed subspace evolution and transfer (SET), is proposed for solving the consistent matrix completion problem. In this setting, one is given a subset of the entries of a low-rank matrix, and asked to find one low-rank matrix consistent with the given observations. We show that this problem can be solved by searching for a column space that matches the observations. The corresponding algorithm consists of two parts -- subspace evolution and subspace transfer. In the evolution part, we use a line search procedure to refine the column space. However, line search is not guaranteed to converge, as there may exist barriers along the search path that prevent the algorithm from reaching a global optimum. To address this problem, in the transfer part, we design mechanisms to detect barriers and transfer the estimated column space from one side of the barrier to the another. The SET algorithm exhibits excellent empirical performance for very low-rank matrices.",
"We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion."
]
}
|
0910.0777
|
2952057871
|
We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.
|
There is a tremendous amount of work on maximizing submodular functions under a single knapsack constraint @cite_13 , multiple knapsack constraints @cite_8 , and both knapsack and matroid constraints @cite_12 @cite_1 . While our profit function is submodular, the constraints given by the graph are not captured by a matroid (our feasible solutions, for example, are not closed downward). Thus, the 1-neighbour knapsack problem represents a class of knapsack problems with realistic constraints that is not covered by previous work.
|
{
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_12",
"@cite_8"
],
"mid": [
"",
"2033885045",
"2026338082",
"123178497"
],
"abstract": [
"",
"In this paper, we obtain an (1-e^-^1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations.",
"Submodular function maximization is a central problem in combinatorial optimization, generalizing many important problems including Max Cut in directed undirected graphs and in hypergraphs, certain constraint satisfaction problems, maximum entropy sampling, and maximum facility location problems. Unlike submodular minimization, submodular maximization is NP-hard. In this paper, we give the first constant-factor approximation algorithm for maximizing any non-negative submodular function subject to multiple matroid or knapsack constraints. We emphasize that our results are for non-monotone submodular functions. In particular, for any constant k, we present a (1 k+2+1 k+e)-approximation for the submodular maximization problem under k matroid constraints, and a (1 5-e)-approximation algorithm for this problem subject to k knapsack constraints (e>0 is any constant). We improve the approximation guarantee of our algorithm to 1 k+1+ 1 k-1 +e for k≥2 partition matroid constraints. This idea also gives a ( 1 k+e)-approximation for maximizing a monotone submodular function subject to k≥2 partition matroids, which improves over the previously best known guarantee of 1 k+1.",
"The concept of submodularity plays a vital role in combinatorial optimization. In particular, many important optimization problems can be cast as submodular maximization problems, including maximum coverage, maximum facility location and max cut in directed undirected graphs. In this paper we present the first known approximation algorithms for the problem of maximizing a nondecreasing submodular set function subject to multiple linear constraints. Given a d-dimensional budget vector [EQUATION], for some d ≥ 1, and an oracle for a non-decreasing submodular set function f over a universe U, where each element e ∈ U is associated with a d-dimensional cost vector, we seek a subset of elements S ⊆ U whose total cost is at most [EQUATION], such that f(S) is maximized. We develop a framework for maximizing submodular functions subject to d linear constraints that yields a (1 - e)(1 - e−1)-approximation to the optimum for any e > 0, where d > 1 is some constant. Our study is motivated by a variant of the classical maximum coverage problem that we call maximum coverage with multiple packing constraints. We use our framework to obtain the same approximation ratio for this problem. To the best of our knowledge, this is the first time the theoretical bound of 1 - e−1 is (almost) matched for both of these problems."
]
}
|
0910.0777
|
2952057871
|
We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.
|
As we show in , the general, undirected 1-neighbour knapsack problem generalizes several maximum coverage problems, including the budgeted variant considered by Khuller, Moss, and Naor @cite_3 , which has a tight @math -approximation unless P=NP. Our algorithm for the general 1-neighbour problem follows the approach of Khuller, Moss, and Naor but, because of the dependency graph, requires several new technical ideas. In particular, our analysis of the greedy step is a non-trivial generalization of the standard greedy algorithm for submodular maximization.
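For reference, the plain cost-benefit greedy for budgeted maximum coverage (the setting of @cite_3 , with no neighbour constraints) looks roughly as follows. This is our own Python sketch; reaching the tight factor in @cite_3 additionally requires partial enumeration of small seed solutions, and the 1-neighbour generalization discussed above needs further machinery on top of this greedy step.

def budgeted_max_coverage(sets, costs, weights, budget):
    """sets: list of element sets; costs: parallel list of set costs; weights: dict mapping element -> weight."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, s in enumerate(sets):
            if i in chosen or spent + costs[i] > budget:
                continue
            gain = sum(weights[e] for e in s - covered)       # marginal weight covered by set i
            if gain > 0 and gain / costs[i] > best_ratio:     # pick the best coverage-per-cost ratio
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        chosen.append(best)
        covered |= sets[best]
        spent += costs[best]
    return chosen, covered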
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2080379754"
],
"abstract": [
"Abstract The budgeted maximum coverage problem is: given a collection S of sets with associated costs defined over a domain of weighted elements, and a budget L , find a subset of S ′⫅ S such that the total cost of sets in S ′ does not exceed L , and the total weight of elements covered by S ′ is maximized. This problem is NP-hard. For the special case of this problem, where each set has unit cost, a (1−1 e ) -approximation is known. Yet, prior to this work, no approximation results were known for the general cost version. The contribution of this paper is a (1−1 e ) -approximation algorithm for the budgeted maximum coverage problem. We also argue that this approximation factor is the best possible, unless NP ⫅ DTIME (n O ( log log n) ) ."
]
}
|
0910.0777
|
2952057871
|
We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.
|
Johnson and Niemi @cite_7 give an FPTAS for knapsack problems on dependency graphs that are in-arborescences (these are directed trees in which every arc is directed toward a single root). In their problem formulation, the constraints are given as out-arborescences---directed trees in which every arc is directed away from a single root---and feasible solutions are subsets of vertices that are closed under the predecessor operation. This problem can be viewed as an instance of the general, directed 1-neighbour knapsack problem.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2157952725"
],
"abstract": [
"Let G be an acyclic directed graph with weights and values assigned to its vertices. In the partially ordered knapsack problem we wish to find a maximum-valued subset of vertices whose total weight does not exceed a given knapsack capacity, and which contains every predecessor of a vertex if it contains the vertex itself. We consider the special case where G is an out-tree. Even though this special case is still NP-complete, we observe how dynamic programming techniques can be used to construct pseudopolynomial time optimization algorithms and fully polynomial time approximation schemes for it. In particular, we show that a nonstandard approach we call “left-right” dynamic programming is better suited for this problem than the standard “bottom-up” approach, and we show how this “left-right” approach can also be adapted to the case of in-trees and to a related tree partitioning problem arising in integrated circuit design. We conclude by presenting complexity results which indicate that similar success cannot be expected with either problem when the restriction to trees is lifted."
]
}
|
0910.0777
|
2952057871
|
We study a constrained version of the knapsack problem in which dependencies between items are given by the adjacencies of a graph. In the 1-neighbour knapsack problem, an item can be selected only if at least one of its neighbours is also selected. In the all-neighbours knapsack problem, an item can be selected only if all its neighbours are also selected. We give approximation algorithms and hardness results when the nodes have both uniform and arbitrary weight and profit functions, and when the dependency graph is directed and undirected.
|
In the subset-union knapsack problem (SUKP) @cite_9 , each item is a subset of a ground set of elements. Each element of the ground set has a weight, each item has a profit, and the goal is to find a maximum-profit set of items such that the total weight of the union of their element sets fits in the knapsack. This is easily seen to be a special case of the general, directed all-neighbours knapsack problem: there is a vertex for each item and each element, and an arc from an item to each element in the item's set. In @cite_9 , Kellerer, Pferschy, and Pisinger show that SUKP is NP-hard and give an exact algorithm with exponential running time. The precedence constrained knapsack problem @cite_0 and the partially-ordered knapsack problem @cite_14 are special cases of the general, directed all-neighbours knapsack problem in which the dependency graph is a DAG. Hajiaghayi et al. show that the partially-ordered knapsack problem is hard to approximate within a @math factor unless 3SAT @math DTIME @math @cite_5 .
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_9",
"@cite_14"
],
"mid": [
"2141422661",
"",
"1975442866",
"2036199750"
],
"abstract": [
"We consider a knapsack problem with precedence constraints imposed on pairs of items, known as the precedence constrained knapsack problem (PCKP). This problem has applications in manufacturing and mining, and also appears as a subproblem in decomposition techniques for network design and related problems. We present a new approach for determining facets of the PCKP polyhedron based on clique inequalities. A comparison with existing techniques, that lift knapsack cover inequalities for the PCKP, is also presented. It is shown that the clique-based approach generates facets that cannot be found through the existing cover-based approaches, and that the addition of clique-based inequalities for the PCKP can be computationally beneficial, for both PCKP instances arising in real applications, and applications in which PCKP appears as an embedded structure.",
"",
"Throughout the 1960s I worked on combinatorial optimization problems including logic circuit design with Paul Roth and assembly line balancing and the traveling salesman problem with Mike Held. These experiences made me aware that seemingly simple discrete optimization problems could hold the seeds of combinatorial explosions. The work of Dantzig, Fulkerson, Hoffman, Edmonds, Lawler and other pioneers on network flows, matching and matroids acquainted me with the elegant and efficient algorithms that were sometimes possible. Jack Edmonds’ papers and a few key discussions with him drew my attention to the crucial distinction between polynomial-time and superpolynomial-time solvability. I was also influenced by Jack’s emphasis on min-max theorems as a tool for fast verification of optimal solutions, which foreshadowed Steve Cook’s definition of the complexity class NP. Another influence was George Dantzig’s suggestion that integer programming could serve as a universal format for combinatorial optimization problems.",
"In the partially ordered knapsack problem (POK) we are given a set N of items and a partial order @?\"P on N. Each item has a size and an associated weight. The objective is to pack a set N^'@?N of maximum weight in a knapsack of bounded size. N^' should be precedence-closed, i.e., be a valid prefix of @?\"P. POK is a natural generalization, for which very little is known, of the classical Knapsack problem. In this paper we present both positive and negative results. We give an FPTAS for the important case of a two-dimensional partial order, a class of partial orders which is a substantial generalization of the series-parallel class, and we identify the first non-trivial special case for which a polynomial-time algorithm exists. Our results have implications for approximation algorithms for scheduling precedence-constrained jobs on a single machine to minimize the sum of weighted completion times, a problem closely related to POK."
]
}
|
0910.0881
|
2951864014
|
In this work we study the problem of misbehavior detection in wireless networks. A commonly adopted approach is to utilize the broadcasting nature of the wireless medium and have nodes monitor their neighborhood. We call such nodes the Watchdogs. In this paper, we first show that even if a watchdog can overhear all packet transmissions of a flow, any linear operation of the overheard packets cannot eliminate miss-detection and is inefficient in terms of bandwidth. We propose a light-weight misbehavior detection scheme which integrates the idea of watchdogs and error detection coding. We show that even if the watchdog can only observe a fraction of packets, by choosing the encoder properly, an attacker will be detected with high probability while achieving throughput arbitrarily close to optimal. Such properties reduce the incentive for the attacker to attack.
|
Several solutions that address pollution attacks in intra-flow coding systems use specially crafted digital signatures @cite_9 , @cite_11 , @cite_8 , @cite_4 or hash functions @cite_6 , @cite_10 with homomorphic properties that allow intermediate nodes to verify the integrity of combined packets. Non-cryptographic solutions have also been proposed @cite_7 , @cite_14 . @cite_12 proposes two practical schemes that address pollution attacks against network coding in wireless mesh networks without requiring complex cryptographic functions and while incurring little overhead.
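As a toy illustration of the property those schemes certify (explicitly not any of the cited constructions, and over the reals rather than the finite fields used in practice), a verifier that somehow held the original generation of packets could simply test whether a received coded packet lies in their span; the homomorphic signatures and hashes above let forwarders perform essentially this check without access to the originals.

import numpy as np

def is_valid_combination(originals, coded, tol=1e-8):
    """originals: k x n array of source packets; coded: length-n received packet (toy, real-valued)."""
    coeffs, *_ = np.linalg.lstsq(originals.T, coded, rcond=None)     # best-fit mixing coefficients
    return bool(np.linalg.norm(originals.T @ coeffs - coded) < tol)  # in the span iff the residual vanishes

rng = np.random.default_rng(0)
gen = rng.standard_normal((3, 6))                # three source packets of length 6
good = 2.0 * gen[0] - 0.5 * gen[2]               # a legitimate linear combination
bad = good + 0.1 * rng.standard_normal(6)        # a "polluted" packet off the span
print(is_valid_combination(gen, good), is_valid_combination(gen, bad))   # True False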
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2136237463",
"",
"2563871604",
"2153098283",
"",
"",
"",
"2059827048",
"2137771025"
],
"abstract": [
"Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces the first distributed polynomial-time rate-optimal network codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam zO links , our first algorithm achieves a rate of C - 2zO, where C is the network capacity. In contrast, when the adversary has limited snooping capabilities, we provide algorithms that achieve the higher rate of C - zO. Our algorithms attain the optimal rate given the strength of the adversary. They are information-theoretically secure. They operate in a distributed manner, assume no knowledge of the topology, and can be designed and implemented in polynomial-time. Furthermore, only the source and destination need to be modified; non-malicious nodes inside the network are oblivious to the presence of adversaries and implement a classical distributed network code. Finally, our algorithms work over wired and wireless networks.",
"",
"Distributed randomized network coding, a robust approach to multicasting in distributed network settings, can be extended to provide Byzantine modification detection without the use of cryptographic functions is presented in this paper.",
"Recent research has shown that network coding can be used in content distribution systems to improve the speed of downloads and the robustness of the systems. However, such systems are very vulnerable to attacks by malicious nodes, and we need to have a signature scheme that allows nodes to check the validity of a packet without decoding. In this paper, we propose such a signature scheme for network coding. Our scheme makes use of the linearity property of the packets in a coded system, and allows nodes to check the integrity of the packets received easily. We show that the proposed scheme is secure, and its overhead is negligible for large files.",
"",
"",
"",
"Recent studies show that network coding can provide significant benefits to network protocols, such as increased throughput, reduced network congestion, higher reliability, and lower power consumption. The core principle of network coding is that intermediate nodes actively mix input packets to produce output packets. This mixing subjects network coding systems to a severe security threat, known as a , where attacker nodes inject corrupted packets into the network. Corrupted packets propagate in an epidemic manner, depleting network resources and significantly decreasing throughput. Pollution attacks are particularly dangerous in wireless networks, where attackers can easily inject packets or compromise devices due to the increased network vulnerability. In this paper, we address pollution attacks against network coding systems in wireless mesh networks. We demonstrate that previous solutions to the problem are impractical in wireless networks, incurring an unacceptably high degradation of throughput. We propose a lightweight scheme, DART, that uses time-based authentication in combination with random linear transformations to defend against pollution attacks. We further improve system performance and propose EDART, which enhances DART with an optimistic forwarding scheme. A detailed security analysis shows that the probability of a polluted packet passing our verification procedure is very low. Performance results using the well-known MORE protocol and realistic link quality measurements from the Roofnet experimental testbed show that our schemes improve system performance over 20 times compared to previous solutions.",
"Network coding provides the possibility to maximize network throughput and receives various applications in traditional computer networks, wireless sensor networks and peer-to-peer systems. However, the applications built on top of network coding are vulnerable to pollution attacks, in which the compromised forwarders can inject polluted or forged messages into networks. Existing schemes addressing pollution attacks either require an extra secure channel or incur high computation overhead. In this paper, we propose an efficient signature-based scheme to detect and filter pollution attacks for the applications adopting linear network coding techniques. Our scheme exploits a novel homomorphic signature function to enable the source to delegate its signing authority to forwarders, that is, the forwarders can generate the signatures for their output messages without contacting the source. This nice property allows the forwarders to verify the received messages, but prohibit them from creating the valid signatures for polluted or forged ones. Our scheme does not need any extra secure channels, and can provide source authentication and batch verification. Experimental results show that it can improve computation efficiency up to ten times compared to some existing one. In addition, we present an alternate lightweight scheme based on a much simpler linear signature function. This alternate scheme provides a tradeoff between computation efficiency and security."
]
}
|
0909.4893
|
2953188824
|
We study the non-overlapping indexing problem: Given a text T, preprocess it so that you can answer queries of the form: given a pattern P, report the maximal set of non-overlapping occurrences of P in T. A generalization of this problem is the range non-overlapping indexing where in addition we are given two indexes i,j to report the maximal set of non-overlapping occurrences between these two indexes. We suggest new solutions for these problems. For the non-overlapping problem our solution uses O(n) space with query time of O(m + occ_NO). For the range non-overlapping problem we propose a solution with O(n log^ε n) space for some 0<ε<1 and O(m + log log n + occ_{ij,NO}) query time.
|
Given a text @math of length @math over an alphabet @math , the problem is to build an index on @math which can answer pattern matching queries efficiently: Given a pattern @math of length @math , we want to report all its occurrences in @math . There are some known solutions for this problem. For instance, the suffix tree, proposed by Weiner @cite_0 , is a compacted trie storing all suffixes of the text. A suffix tree for text @math of length @math requires @math space and can be built in @math preprocessing time. It has query time of @math where @math is the number of occurrences of @math in @math .
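As a concrete, simplified illustration of such an index, the sketch below builds a suffix array (the sorted suffix start positions) instead of a suffix tree and answers pattern queries by binary search; the query time is O(m log n + occ) rather than the suffix tree's O(m + occ), and the naive construction is far from linear, but the preprocess-once / query-many structure is the same. The class and function names are illustrative only.

```python
import bisect

class SuffixArrayIndex:
    """Toy text index: suffix array built by sorting, queried by binary search."""

    def __init__(self, text):
        self.text = text
        # Naive O(n^2 log n) construction, kept simple; real indexes build this in O(n).
        self.sa = sorted(range(len(text)), key=lambda i: text[i:])

    def occurrences(self, pattern):
        """Return the sorted start positions of all occurrences of `pattern`."""
        text, sa, m = self.text, self.sa, len(pattern)
        # Suffixes starting with `pattern` form a contiguous block of the suffix
        # array; locate its boundaries with two binary searches over the m-prefixes.
        keys = [text[i:i + m] for i in sa]   # O(nm) here, for clarity only
        lo = bisect.bisect_left(keys, pattern)
        hi = bisect.bisect_right(keys, pattern)
        return sorted(sa[lo:hi])

index = SuffixArrayIndex("abracadabra")
print(index.occurrences("abra"))   # [0, 7]
print(index.occurrences("bra"))    # [1, 8]
```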
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2533248932"
],
"abstract": [
"In 1970, Knuth, Pratt, and Morris [1] showed how to do basic pattern matching in linear time. Related problems, such as those discussed in [4], have previously been solved by efficient but sub-optimal algorithms. In this paper, we introduce an interesting data structure called a bi-tree. A linear time algorithm for obtaining a compacted version of a bi-tree associated with a given string is presented. With this construction as the basic tool, we indicate how to solve several pattern matching problems, including some from [4] in linear time."
]
}
|
0909.4893
|
2953188824
|
We study the non-overlapping indexing problem: Given a text T, preprocess it so that you can answer queries of the form: given a pattern P, report the maximal set of non-overlapping occurrences of P in T. A generalization of this problem is the range non-overlapping indexing where in addition we are given two indexes i,j to report the maximal set of non-overlapping occurrences between these two indexes. We suggest new solutions for these problems. For the non-overlapping problem our solution uses O(n) space with query time of O(m + occ_NO). For the range non-overlapping problem we propose a solution with O(n log^ε n) space for some 0 < ε < 1 and O(m + log log n + occ_ij,NO) query time.
|
@cite_2 proposed a solution for a generalization of this problem, called the range non-overlapping indexing problem, where we want to report the non-overlapping occurrences in a substring of @math . Their solution has query time of @math and uses @math space, where @math is the number of maximal non-overlapping occurrences in the substring @math .
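To make the query semantics concrete, the sketch below takes the sorted occurrence list of a pattern and greedily reports a maximal set of non-overlapping occurrences, optionally restricted to a range [i, j]. It only illustrates what the queries return; it is not the indexing data structure of @cite_2 or of this paper, and the parameter names are made up.

```python
def non_overlapping(occurrences, m, i=0, j=None):
    """Greedy left-to-right selection of non-overlapping occurrences.

    occurrences : sorted start positions of a pattern of length m
    i, j        : optional range restriction (keep only occurrences lying in [i, j])
    Scanning left to right and keeping the earliest occurrence that does not
    overlap the previously kept one yields a maximum-size non-overlapping set.
    """
    if j is None:
        j = float("inf")
    kept, next_free = [], i
    for pos in occurrences:
        if pos < i or pos + m - 1 > j:
            continue                   # outside the query range
        if pos >= next_free:           # does not overlap the last kept occurrence
            kept.append(pos)
            next_free = pos + m        # the next kept occurrence must start here or later
    return kept

# Occurrences of "aa" (m = 2) in "aaaa" are [0, 1, 2]; a maximal
# non-overlapping subset is [0, 2].
print(non_overlapping([0, 1, 2], m=2))             # [0, 2]
print(non_overlapping([0, 1, 2], m=2, i=1, j=3))   # [1]
```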
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1480681399"
],
"abstract": [
"We present two natural variants of the indexing problem: In the range non-overlapping indexing problem, we preprocess a given text to answer queries in which we are given a pattern, and wish to find a maximal-length sequence of occurrences of the pattern in the text, such that the occurrences do not overlap with one another. While efficiently solving this problem, our algorithm even enables us to efficiently perform so in substrings of the text, denoted by given start and end locations. The methods we supply thus generalize the string statistics problem [4,5], in which we are asked to report merely the number of non-overlapping occurrences in the entire text, by reporting the occurrences themselves, even only for substrings of the text. In the related successive list indexing problem, during query-time we are given a pattern and a list of locations in the preprocessed text. We then wish to find a list of occurrences of the pattern, such that the ith occurrence is the leftmost occurrence of the pattern which starts to the right of the ith location given by the input list. Both problems are solved by using tools from computational geometry, specifically a variation of the range searching for minimum problem of Lenhof and Smid [12], here considered over a grid, in what appears to be the first utilization of range searching for minimum in an indexing-related context."
]
}
|
0909.4893
|
2953188824
|
We study the non-overlapping indexing problem: Given a text T, preprocess it so that you can answer queries of the form: given a pattern P, report the maximal set of non-overlapping occurrences of P in T. A generalization of this problem is the range non-overlapping indexing where in addition we are given two indexes i,j to report the maximal set of non-overlapping occurrences between these two indexes. We suggest new solutions for these problems. For the non-overlapping problem our solution uses O(n) space with query time of O(m + occ_NO). For the range non-overlapping problem we propose a solution with O(n log^ε n) space for some 0 < ε < 1 and O(m + log log n + occ_ij,NO) query time.
|
@cite_3 suggested another solution for the problem. Their solution has optimal query time of @math but requires @math space.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2026496324"
],
"abstract": [
"The Range Next Value problem (problem RNV) is a recent interesting variant of the range search problems, where the query is for the immediate next (or equal) value of a given number within a given interval of an array. Problem RNV was introduced and studied very recently by [Maxime Crochemore, Costas S. Iliopoulos, M. Sohel Rahman, Finding patterns in given intervals, in: Antonin Kucera, Ludek Kucera (Eds.), MFCS, 22 in: Lecture Notes in Computer Science, vol. 4708, Springer, 2007, pp. 645-656]. In this paper, we present improved algorithms for problem RNV and algorithms for extended versions of the RNV problem. We also show how this problem can be used to achieve optimal query time for a number of interesting variants of the classic pattern matching problems."
]
}
|
0909.5677
|
1531647055
|
We study the design of mechanisms in combinatorial auction domains. We focus on settings where the auction is repeated, motivated by auctions for licenses or advertising space. We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies. We ask: when can a black-box approximation algorithm for the base auction problem be converted into a mechanism that approximately preserves the original algorithm's approximation factor on average over many iterations? We present a general reduction for a broad class of algorithms when agents minimize external regret. We also present a new mechanism for the combinatorial auction problem that attains an @math approximation on average when agents apply best-response dynamics.
|
Truthful mechanisms for the combinatorial auction problem have been extensively studied. For general CAs, Hastad's well-known inapproximability result @cite_5 shows that it is hard to approximate the problem to within @math assuming @math . The best known deterministic truthful mechanism for CAs with general valuations attains an approximation ratio of @math @cite_7 . A randomized @math -approximate mechanism that is truthful in expectation was given by Lavi and Swamy @cite_22 . Dobzinski, Nisan and Schapira @cite_6 then gave an @math -approximate universally truthful randomized mechanism.
|
{
"cite_N": [
"@cite_5",
"@cite_22",
"@cite_7",
"@cite_6"
],
"mid": [
"2087226760",
"2103751307",
"",
"1965161364"
],
"abstract": [
"We prove optimal, up to an arbitrary 2 > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover.",
"We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any spl alpha -approximation algorithm that also bounds the integrality gap of the IF relaxation of the problem by a can be used to construct an spl alpha -approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O( spl radic m) for combinatorial auctions (CAs), (1 + spl epsi ) for multiunit CAs with B = spl Omega (log m) copies of each item, and 2 for multiparameter knapsack problems (multiunit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism by W. Vickrey (1961), E. Clarke (1971) and T. Groves (1973) to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by a, where a is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful in expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computational tractable way even when the underlying social-welfare maximization problem is NP-hard.",
"",
"We present a new framework for the design of computationally-efficient and incentive-compatible mechanisms for combinatorial auctions. The mechanisms obtained via this framework are randomized, and obtain incentive compatibility in the universal sense (in contrast to the substantially weaker notion of incentive compatibility in expectation). We demonstrate the usefulness of our techniques by exhibiting two mechanisms for combinatorial auctions with general bidder preferences. The first mechanism obtains an optimal O(m)-approximation to the optimal social welfare for arbitrary bidder valuations. The second mechanism obtains an O(log^2m)-approximation for a class of bidder valuations that contains the important class of submodular bidders. These approximation ratios greatly improve over the best (known) deterministic incentive-compatible mechanisms for these classes."
]
}
|
0909.5677
|
1531647055
|
We study the design of mechanisms in combinatorial auction domains. We focus on settings where the auction is repeated, motivated by auctions for licenses or advertising space. We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies. We ask: when can a black-box approximation algorithm for the base auction problem be converted into a mechanism that approximately preserves the original algorithm's approximation factor on average over many iterations? We present a general reduction for a broad class of algorithms when agents minimize external regret. We also present a new mechanism for the combinatorial auction problem that attains an @math approximation on average when agents apply best-response dynamics.
|
The problem of designing combinatorial auction mechanisms that implement approximations at equilibria (and, in particular, Bayes-Nash equilibria for partial information settings) was considered in @cite_9 for submodular CAs, and in @cite_16 for general CA problems. Implementation at equilibrium, especially for the alternative goal of profit maximization, has a rich history in the economics literature; see, for example, Jackson @cite_11 for a survey.
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_11"
],
"mid": [
"",
"2077121235",
"1977483205"
],
"abstract": [
"",
"We study mechanisms for utilitarian combinatorial allocation problems, where agents are not assumed to be single-minded. This class of problems includes combinatorial auctions, multi-unit auctions, unsplittable flow problems, and others. We focus on the problem of designing mechanisms that approximately optimize social welfare at every Bayes-Nash equilibrium (BNE), which is the standard notion of equilibrium in settings of incomplete information. For a broad class of greedy approximation algorithms, we give a general black-box reduction to deterministic mechanisms with almost no loss to the approximation ratio at any BNE. We also consider the special case of Nash equilibria in full-information games, where we obtain tightened results. This solution concept is closely related to the well-studied price of anarchy. Furthermore, for a rich subclass of allocation problems, pure Nash equilibria are guaranteed to exist for our mechanisms. For many problems, the approximation factors we obtain at equilibrium improve upon the best known results for deterministic truthful mechanisms. In particular, we exhibit a simple deterministic mechanism for general combinatorial auctions that obtains an O(√m) approximation at every BNE.",
"This paper is meant to familiarize the audience with some of the fundamental results in the theory of implementation and provide a quick progression to some open questions in the literature."
]
}
|
0909.5677
|
1531647055
|
We study the design of mechanisms in combinatorial auction domains. We focus on settings where the auction is repeated, motivated by auctions for licenses or advertising space. We consider models of agent behaviour in which they either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies. We ask: when can a black-box approximation algorithm for the base auction problem be converted into a mechanism that approximately preserves the original algorithm's approximation factor on average over many iterations? We present a general reduction for a broad class of algorithms when agents minimize external regret. We also present a new mechanism for the combinatorial auction problem that attains an @math approximation on average when agents apply best-response dynamics.
|
@cite_10 study implementation of algorithms in undominated strategies, which is a relaxation of the dominant strategy truthfulness concept. They focus on a variant of the CA problem in which agents are assumed to have "single-value" valuations, and present a mechanism to implement such auctions in a multi-round fashion. By comparison, mechanisms in our proposed model solve each instance of an auction in a one-shot manner, and our solution concept assumes that the auction is repeated multiple times.
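Since the repeated-auction model here has agents bidding while minimizing external regret, the following sketch shows one standard such learning rule (multiplicative weights / Hedge) over a finite grid of candidate bids. It is only meant to make the agent-behaviour model concrete: the reward function, the full-feedback assumption, and all names are illustrative, and this is not a mechanism from any of the cited papers.

```python
import math
import random

def hedge_bidder(candidate_bids, reward, rounds=2000, eta=0.1):
    """External-regret minimization (Hedge) over a finite set of candidate bids.

    candidate_bids : bid values the agent may submit in each round
    reward(bid, t) : utility of submitting `bid` in round t (full feedback is
                     assumed, i.e. the agent can evaluate every candidate bid)
    Yields the bid played in each round.
    """
    weights = [1.0] * len(candidate_bids)
    for t in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        chosen = random.choices(range(len(candidate_bids)), probs)[0]
        for k, bid in enumerate(candidate_bids):
            weights[k] *= math.exp(eta * reward(bid, t))   # multiplicative update
        yield candidate_bids[chosen]

# Toy repeated single-item auction: a rival always bids 0.5 and the agent's
# value is 0.7, so utility is (0.7 - bid) when the agent outbids the rival.
def toy_reward(bid, t, value=0.7, rival=0.5):
    return (value - bid) if bid > rival else 0.0

bids = [round(0.1 * k, 1) for k in range(11)]   # 0.0, 0.1, ..., 1.0
played = list(hedge_bidder(bids, toy_reward))
print(played[-1])   # after many rounds the play concentrates on 0.6
```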
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2142270691"
],
"abstract": [
"In this article, we are interested in general techniques for designing mechanisms that approximate the social welfare in the presence of selfish rational behavior. We demonstrate our results in the setting of Combinatorial Auctions (CA). Our first result is a general deterministic technique to decouple the algorithmic allocation problem from the strategic aspects, by a procedure that converts any algorithm to a dominant-strategy ascending mechanism. This technique works for any single value domain, in which each agent has the same value for each desired outcome, and this value is the only private information. In particular, for “single-value CAs”, where each player desires any one of several different bundles but has the same value for each of them, our technique converts any approximation algorithm to a dominant strategy mechanism that almost preserves the original approximation ratio. Our second result provides the first computationally efficient deterministic mechanism for the case of single-value multi-minded bidders (with private value and private desired bundles). The mechanism achieves an approximation to the social welfare which is close to the best possible in polynomial time (unless PeNP). This mechanism is an algorithmic implementation in undominated strategies, a notion that we define and justify, and is of independent interest."
]
}
|
0909.3637
|
1587547257
|
We consider algorithms to schedule packets with values and deadlines in a size-bounded buffer. At any time, the buffer can store at most B packets. Packets arrive over time. Each packet has a non-negative value and an integer deadline. In each time step, at most one packet can be sent. Packets can be dropped at any time before they are sent. The objective is to maximize the total value gained by delivering packets no later than their respective deadlines. This model generalizes the well-studied bounded-delay model (Hajek. CISS 2001. STOC 2001). We first provide an optimal offline algorithm for this model. Then we present an alternative proof of the 2-competitive deterministic online algorithm (Fung. arXiv July 2009). We also prove that the lower bound on the competitive ratio of a family of (deterministic and randomized) algorithms is 2 - 1/B.
|
The bounded buffer model is studied by Li @cite_22 . Its generalization, called the multi-buffer model, is considered by Azar and Levy @cite_2 . In @cite_22 , a @math -competitive deterministic algorithm and a ( @math )-competitive randomized algorithm are given. Fung @cite_0 provides a @math -competitive deterministic algorithm, and in this paper we present an alternative proof. Azar and Levy @cite_2 provide a @math -competitive deterministic algorithm, which also works for the multi-buffer model. For the multi-buffer model, Li @cite_20 improves the competitive ratio to @math .
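For concreteness, the sketch below simulates the size-bounded buffer model with a naive greedy policy (keep the most valuable unexpired packets, send the most valuable one each step). It is only a baseline to make the model precise, not any of the competitive algorithms discussed above, and the input format is made up.

```python
def greedy_schedule(arrivals, B, horizon):
    """Naive online policy for the size-bounded buffer model (illustration only).

    arrivals : dict mapping time step t -> list of (value, deadline) packets
    B        : buffer capacity
    Each step: drop expired packets, admit arrivals (evicting the cheapest
    packets if the buffer overflows), then transmit the most valuable packet.
    """
    buffer, gained = [], 0.0
    for t in range(horizon):
        buffer = [p for p in buffer if p[1] >= t]   # drop packets past their deadline
        buffer.extend(arrivals.get(t, []))
        buffer.sort()                               # cheapest packets first
        while len(buffer) > B:
            buffer.pop(0)                           # overflow: evict the cheapest
        if buffer:
            value, _ = buffer.pop()                 # send the most valuable packet
            gained += value
    return gained

# Buffer of size 1, two packets arrive at t = 0: only the valuable one is kept.
print(greedy_schedule({0: [(1.0, 0), (5.0, 1)]}, B=1, horizon=2))   # 5.0
```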
|
{
"cite_N": [
"@cite_0",
"@cite_22",
"@cite_20",
"@cite_2"
],
"mid": [
"2953352566",
"2155749359",
"1544216698",
"1537647641"
],
"abstract": [
"We study the problem of buffer management in QoS-enabled network switches in the bounded delay model where each packet is associated with a weight and a deadline. We consider the more realistic situation where the network switch has a finite buffer size. A 9.82-competitive algorithm is known for the case of multiple buffers (Azar and Levy, SWAT'06). Recently, for the case of a single buffer, a 3-competitive deterministic algorithm and a 2.618-competitive randomized algorithm was known (Li, INFOCOM'09). In this paper we give a simple deterministic 2-competitive algorithm for the case of a single buffer.",
"Motivated by the quality-of-service (QoS) buffer management problem, we consider online scheduling of packets with hard deadlines in a finite capacity queue. At any time, a queue can store at most b isin Z + packets. Packets arrive over time. Each packet is associated with a non-negative value and an integer deadline. In each time step, only one packet is allowed to be sent. Our objective is to maximize the total value gained by the packets sent by their deadlines in an online manner. Due to the Internet traffic's chaotic characteristics, no stochastic assumptions are made on the packet input sequences. This model is called a finite-queue model. We use competitive analysis to measure an online algorithm's performance versus an unrealizable optimal offline algorithm who constructs the worst possible input based on the knowledge of the online algorithm. For the finite-queue model, we first present a deterministic 3-competitive memoryless online algorithm. Then, we give a randomized (Phi 2 = (1+radic(5) 2) 2 ap 2.618)-competitive memoryless online algorithm. The algorithmic framework and its theoretical analysis include several interesting features. First, our algorithms use (possibly) modified characteristics of packets; these characteristics may not be same as those specified in the input sequence. Second, our analysis method is different from the classical potential function approach. We use a simple charging scheme, which depends on a clever modification (during the course of the algorithm) on the packets in the queue of the optimal offline algorithm. We then prove that a set of invariants holds at the end of each time step. Finally, we analyze the two proposed algorithm in a relaxed model, in which packets have no hard deadlines but an order. We conclude that both algorithms have the same competitive ratios in the relaxed model.",
"Motivated by providing differentiated services in the Internet, we consider online buffer management algorithms for quality-of-service network switches. We study a multi-buffer model . Packets have values and deadlines; they arrive at a switch over time. The switch consists of multiple buffers whose sizes are bounded. In each time step, only one pending packet can be sent. Our objective is to maximize the total value of the packets sent by their deadlines. We employ competitive analysis to measure an online algorithm's performance. In this paper, we first show that the lower bound of competitive ratio of a broad family of online algorithms is 2. Then we propose a ( @math )-competitive deterministic algorithm, which is improved from the previously best-known result 9.82 (Azar and Levy. SWAT 2006).",
"We study the online problem of multiplexing packets with arbitrary deadlines in bounded multi-buffer switch. In this model, a switch consists of m input buffers each with bounded capacity B and one output port. Each arriving packet is associated with a value and a deadline that specifies the time limit till the packet can be transmitted. At each time step the switch can select any non-empty buffer and transmit one packet from that buffer. In the preemptive model, stored packets may be preempted from their buffers due to lack of buffer space or discarded due to the violation of the deadline constraints. If preemption is not allowed, every packet accepted and stored in the buffer must be transmitted before its deadline has expired. The goal is to maximize the benefit of the packets transmitted by their deadlines. To date, most models for packets with deadlines assumed a single buffer. To the best of our knowledge this is the first time a bounded multi-buffer switch is used with arbitrary deadline constraints Our main result is a 9.82-competitive deterministic algorithm for packets with arbitrary values and deadlines. Note that the greedy algorithm is not competitive. For the non-preemptive model we present a 2-competitive deterministic algorithm for the unit value packets. For arbitrary values we present a randomized algorithm whose competitiveness is logarithmic in the ratio between the largest and the smallest value of the packets in the sequence"
]
}
|
0909.3688
|
2951846779
|
Web-fraud is one of the most unpleasant features of today's Internet. Two well-known examples of fraudulent activities on the web are phishing and typosquatting. Their effects range from relatively benign (such as unwanted ads) to downright sinister (especially, when typosquatting is combined with phishing). This paper presents a novel technique to detect web-fraud domains that utilize HTTPS. To this end, we conduct the first comprehensive study of SSL certificates. We analyze certificates of legitimate and popular domains and those used by fraudulent ones. Drawing from extensive measurements, we build a classifier that detects such malicious domains with high accuracy.
|
User studies analyzing the effectiveness of browser warning messages indicated that an overwhelming percentage of users (up to 70-80%) ignore such warnings. This observation ---confirmed by recent research results (e.g., @cite_33 and @cite_2 )--- might explain the unexpectedly high percentage of expired and self-signed certificates that we found in the results of all our data sets. In @cite_42 , the authors proposed a solution to simplify the process (for the users) of authenticating the servers' public keys (or SSL certificates) by deploying a collection of semi-trusted network servers that continuously probed the servers and collected their public keys (or SSL certificates). When the user was exposed to a new public key (SSL certificate), he referred to these semi-trusted servers to verify the authenticity of public keys (SSL certificates).
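As a small illustration of the kind of certificate attributes discussed in this work (expired and self-signed certificates), the sketch below extracts two coarse flags from a PEM-encoded certificate with the Python `cryptography` library. The issuer-equals-subject test is only a common heuristic for self-signed certificates, the file path is hypothetical, and this is not the feature extraction or classifier used in the paper.

```python
from datetime import datetime
from cryptography import x509

def certificate_flags(pem_bytes):
    """Return two coarse flags for an SSL certificate: expired? self-signed?"""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    expired = datetime.utcnow() > cert.not_valid_after    # validity window has passed
    self_signed = cert.issuer == cert.subject             # heuristic, no signature check
    return {"expired": expired, "self_signed": self_signed}

# Usage (the path is hypothetical):
# with open("some_cert.pem", "rb") as f:
#     print(certificate_flags(f.read()))
```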
|
{
"cite_N": [
"@cite_42",
"@cite_33",
"@cite_2"
],
"mid": [
"2161954933",
"1550000763",
"2099889974"
],
"abstract": [
"The popularity of \"Trust-on-first-use\" (Tofu) authentication, used by SSH and HTTPS with self-signed certificates, demonstrates significant demand for host authentication that is low-cost and simple to deploy. While Tofu-based applications are a clear improvement over completely insecure protocols, they can leave users vulnerable to even simple network attacks. Our system, PERSPECTIVES, thwarts many of these attacks by using a collection of \"notary\" hosts that observes a server's public key via multiple network vantage points (detecting localized attacks) and keeps a record of the server's key over time (recognizing short-lived attacks). Clients can download these records on-demand and compare them against an unauthenticated key, detecting many common attacks. PERSPECTIVES explores a promising part of the host authentication design space: Trust-on-first-use applications gain significant attack robustness without sacrificing their ease-of-use. We also analyze the security provided by PERSPECTIVES and describe our experience building and deploying a publicly available implementation.",
"Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100- participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.",
"Many popular web browsers are now including active phishing warnings after previous research has shown that passive warnings are often ignored. In this laboratory study we examine the effectiveness of these warnings and examine if, how, and why they fail users. We simulated a spear phishing attack to expose users to browser warnings. We found that 97 of our sixty participants fell for at least one of the phishing messages that we sent them. However, we also found that when presented with the active warnings, 79 of participants heeded them, which was not the case for the passive warning that we tested---where only one participant heeded the warnings. Using a model from the warning sciences we analyzed how users perceive warning messages and offer suggestions for creating more effective warning messages within the phishing context."
]
}
|
0909.2733
|
1507298155
|
An ancestry labeling scheme assigns labels (bit strings) to the nodes of rooted trees such that ancestry queries between any two nodes in a tree can be answered merely by looking at their corresponding labels. The quality of an ancestry labeling scheme is measured by its label size, that is the maximal number of bits in a label of a tree node. In addition to its theoretical appeal, the design of efficient ancestry labeling schemes is motivated by applications in web search engines. For this purpose, even small improvements in the label size are important. In fact, the literature about this topic is interested in the exact label size rather than just its order of magnitude. As a result, following the proposal of a simple interval-based ancestry scheme with label size @math bits (, STOC '88), a considerable amount of work was devoted to improve the bound on the size of a label. The current state of the art upper bound is @math bits (, SODA '02) which is still far from the known @math bits lower bound (, SODA '03). In this paper we close the gap between the known lower and upper bounds, by constructing an ancestry labeling scheme with label size @math bits. In addition to the optimal label size, our scheme assigns the labels in linear time and can support any ancestry query in constant time.
|
As explained in @cite_0 , the names of nodes in traditional graph representations reveal no information about the graph structure and hence memory is wasted. Moreover, typical representations are usually global in nature, i.e., in order to derive useful information, one must access a global data structure representing the entire network, even if the sought information is local, pertaining to only a few nodes. In contrast, the notion of informative labeling schemes , introduced in @cite_0 , involves an informative method for assigning labels to nodes. Specifically, the assignment is made in a way that allows one to infer information regarding any two nodes directly from their labels, without using any additional information sources. Hence in essence, this method bases the entire representation on the set of labels alone. This method was illustrated in @cite_0 , by giving two elegant and simple labeling schemes for @math -node trees: one supporting adjacency queries and the other supporting ancestry queries. Both schemes incur @math label size.
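For illustration, the classical interval-based ancestry scheme referred to above can be sketched in a few lines: a DFS assigns every node the range of DFS visit numbers occurring in its subtree, so each label is two numbers in [0, n-1] (roughly 2 log n bits), and an ancestry query is answered by interval containment using the labels alone. This is the simple interval scheme, not the improved schemes with smaller labels discussed in this paper; the function names are illustrative.

```python
def interval_labels(children, root):
    """Classical interval-based ancestry labels for a rooted tree.

    children : dict mapping each node to the list of its children
    Each node's label is (first, last): the range of DFS visit numbers that
    occur in its subtree.
    """
    labels, counter = {}, [0]

    def dfs(u):
        first = counter[0]
        counter[0] += 1
        for c in children.get(u, []):
            dfs(c)
        labels[u] = (first, counter[0] - 1)

    dfs(root)
    return labels

def is_ancestor(label_u, label_v):
    """Ancestry query answered from the two labels alone (a node counts as its own ancestor here)."""
    return label_u[0] <= label_v[0] and label_v[1] <= label_u[1]

#     a
#    / \
#   b   c        labels: a=(0,3), b=(1,2), c=(3,3), d=(2,2)
#   |
#   d
labels = interval_labels({"a": ["b", "c"], "b": ["d"]}, root="a")
print(is_ancestor(labels["a"], labels["d"]))   # True
print(is_ancestor(labels["b"], labels["c"]))   # False
```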
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2060963839"
],
"abstract": [
"How to represent a graph in memory is a fundamental data structuring question. In the usual representations of an n-vertex graph, the names of the vertices (i.e., integers from 1 to n) betray nothing about the graph itself. Indeed, the names (or labels) on the n vertices are just @math bit place holders to allow data on the edges to encode the structure of the graph. In this scenario, there is no such waste. By assigning @math bit labels to the vertices, the structure of the graph is completely encoded, so that, given the labels of two vertices, one can test if they are adjacent in time linear in the size of the labels. Furthermore, given an arbitrary original labeling of the vertices, structure coding labels are found (as above) that are no more than a small constant factor larger than the original labels. These notions are intimately related to vertex-induced universal graphs of polynomial size. For example, planar graphs can be labeled with structure coding labels of size @math , which i..."
]
}
|
0909.2894
|
2950152649
|
Downlink spatial intercell interference cancellation (ICIC) is considered for mitigating other-cell interference using multiple transmit antennas. A principal question we explore is whether it is better to do ICIC or simply standard single-cell beamforming. We explore this question analytically and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low ( @math dB), for example in an urban setting. At medium SNR, a proposed adaptive strategy, where multiple base stations jointly select transmission strategies based on the user location, outperforms both while requiring a lower feedback rate than the pure ICIC approach. The employed metric is sum rate, which is normally a dubious metric for cellular systems, but surprisingly we show that even with this reward function the adaptive strategy also improves fairness. When the channel information is provided by limited feedback, the impact of the induced quantization error is also investigated. It is shown that ICIC with well-designed feedback strategies still provides significant throughput gain.
|
Coordinated multicell transmission, also called network MIMO, has recently drawn significant attention. In a network MIMO system, multiple coordinated BSs effectively form a "super BS", which transforms an interference channel into a MIMO broadcast channel with a per-BS power constraint @cite_6 @cite_31 @cite_23 . The optimal dirty paper coding (DPC) @cite_7 @cite_11 and sub-optimal linear precoders have been developed for network MIMO @cite_33 @cite_13 @cite_4 @cite_28 @cite_8 @cite_42 . With simplified network models, analytical results have appeared in @cite_53 @cite_38 @cite_20 @cite_35 .
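As a minimal illustration of the linear precoders mentioned above, the sketch below computes a single-cell zero-forcing precoder with numpy: the right pseudo-inverse of the channel nulls inter-user interference, and a simple sum-power normalization is applied. The multicell, per-BS power-constrained variants surveyed here are more involved; the dimensions and the normalization are illustrative choices only.

```python
import numpy as np

def zero_forcing_precoder(H, total_power=1.0):
    """Single-cell zero-forcing precoder for K single-antenna users, M >= K antennas.

    H : K x M complex channel matrix (row k is user k's channel).
    Returns an M x K precoding matrix W such that H @ W is diagonal (no
    inter-user interference), scaled to meet a sum power constraint.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)              # right pseudo-inverse of H
    W *= np.sqrt(total_power / np.trace(W @ W.conj().T).real)   # sum-power normalization
    return W

rng = np.random.default_rng(0)
K, M = 2, 4
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))   # diagonal: each user sees only its own stream
```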
|
{
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_53",
"@cite_42",
"@cite_6",
"@cite_23",
"@cite_31",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2135096483",
"1991331367",
"2172277720",
"2106612364",
"1976109068",
"2093763487",
"",
"1587524632",
"2108662631",
"2032760019",
"",
"2153339777",
"2139771702",
"1983731331",
"2030546921"
],
"abstract": [
"For a multiple-input single-output (MISO) down- link channel with M transmit antennas, it has been recently proved that zero-forcing beamforming (ZFBF) to a subset of (at most) M \"semi-orthogonal\" users is optimal in terms of the sum-rate, asymptotically with the number of users. However, determining the subset of users for transmission is a complex optimization problem. Adopting the ZFBF scheme in a cooper- ative multi-cell scenario renders the selection process even more difficult since more users are involved. In this paper, we consider a multi-cell cooperative ZFBF scheme combined with a simple sub-optimal users selection procedure for the Wyner downlink channel setup. According to this sub-optimal procedure, the user with the \"best\" local channel is selected for transmission in each cell. It is shown that under an overall power constraint, a distributed multi-cell ZFBF to this sub-optimal subset of users achieves the same sum-rate growth rate as an optimal scheme deploying joint multi-cell dirty-paper coding (DPC) techniques, asymptotically with the number of users per cell. Moreover, the overall power constraint is shown to ensure in probability, equal per-cell power constraints when the number of users per-cell increases.",
"We study the potential benefits of base-station (BS) cooperation for downlink transmission in multicell networks. Based on a modified Wyner-type model with users clustered at the cell-edges, we analyze the dirty-paper-coding (DPC) precoder and several linear precoding schemes, including cophasing, zero-forcing (ZF), and MMSE precoders. For the nonfading scenario with random phases, we obtain analytical performance expressions for each scheme. In particular, we characterize the high signal-to-noise ratio (SNR) performance gap between the DPC and ZF precoders in large networks, which indicates a singularity problem in certain network settings. Moreover, we demonstrate that the MMSE precoder does not completely resolve the singularity problem. However, by incorporating path gain fading, we numerically show that the singularity problem can be eased by linear precoding techniques aided with multiuser selection. By extending our network model to include cell-interior users, we determine the capacity regions of the two classes of users for various cooperative strategies. In addition to an outer bound and a baseline scheme, we also consider several locally cooperative transmission approaches. The resulting capacity regions show the tradeoff between the performance improvement and the requirement for BS cooperation, signal processing complexity, and channel state information at the transmitter (CSIT).",
"We address the problem of providing the best possible service to new users joining a multicellular multiple antenna system without affecting existing users. Since, interference-wise, new users are invisible to existing users, the network is dubbed Phantom Net.",
"A linear pre-processing plus encoding scheme is proposed, which significantly enhances cellular downlink performance, while putting the complexity burden on the transmitting end. The approach is based on LQ factorization of the channel transfer matrix combined with the \"writing on dirty paper\" approach (Caire, G. and Shamai, S., Proc. 38th Annual Allerton Conference on Communication, Control and Computing, 2000) for eliminating the effect of uncorrelated interference, which is fully known at the transmitter but unknown at the receiver. The attainable average rates with the proposed scheme approach those of optimum joint processing at the high SNR region.",
"A channel with output Y = X + S + Z is examined, The state S N(0, QI) and the noise Z N(0, NI) are multivariate Gaussian random variables ( I is the identity matrix.). The input X R^ n satisfies the power constraint (l n) i=1 ^ n X_ i ^ 2 P . If S is unknown to both transmitter and receiver then the capacity is 1 2 (1 + P ( N + Q)) nats per channel use. However, if the state S is known to the encoder, the capacity is shown to be C^ = 1 2 (1 + P N) , independent of Q . This is also the capacity of a standard Gaussian channel with signal-to-noise power ratio P N . Therefore, the state S does not affect the capacity of the channel, even though S is unknown to the receiver. It is shown that the optimal transmitter adapts its signal to the state S rather than attempting to cancel it.",
"Intercell interference limits the capacity of wireless networks. To mitigate this interference we explore coherently coordinated transmission (CCT) from multiple base stations to each user. To treat users fairly, we explore equal rate (ER) networks. We evaluate the downlink network efficiency of CCT as compared to serving each user with single base transmission (SBT) with a separate base uniquely assigned to each user. Efficiency of ER networks is measured as total network throughput relative to the number of network antennas at 10 user outage. Efficiency is compared relative to the baseline of single base transmission with power control, (ER-SBT), where base antenna transmissions are not coordinated and apart from power control and the assignment of 10 of the users to outage, nothing is done to mitigate interference. We control the transmit power of ER systems to maximise the common rate for ER-SBT, ER-CCT based on zero forcing, and ER-CCT employing dirty paper coding. We do so for (no. of transmit antennas per base, no. of receive antennas per user) equal to (1,1), (2,2) and (4,4). We observe that CCT mutes intercell interference enough, so that enormous spectral efficiency improvement associated with using multiple antennas in isolated communication links occurs as well for the base-to-user links in a cellular network.",
"",
"This paper explores the fundamental downlink performance limits of multicell networks when cooperation is allowed between all the base station in the network. Toward this end, we infer the asymptotic sum rate of a capacity-achieving technique (known as dirty paper coding) for the broadcast channel in the limit of a large number of transmit antennas and single-antenna users. The users have unequal SNRs characterized by a doubly-regular gain matrix and we show that this condition can be satisfied in a finite cellular network of hexagonal cells where users are placed on corners of hexagons centered on the bases. Under this placement, we can compute the asymptotic rate per user when dirty paper coding is used across base stations in a coordinated network. We extend the analysis to an infinite coordinated network and show that its performance is lower-bounded by that of a single isolated cell. Performance comparisons are also made with conventional cellular networks where coordination among bases is not allowed.",
"A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination-to enhance the sum rate-and limited inter-cluster coordination-to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.",
"We consider the downlink transmission of a wireless communication system where M antennas transmit independent information to a subset of K users, each equipped with a single antenna. The Shannon capacity of this MIMO broadcast channel (MIMO-BC) can be achieved using a non-linear preceding technique known as dirty paper coding (DPC) which is difficult to implement in practice. Motivated to study simpler transmission techniques, we focus on a linear precoding technique based on the zero-forcing (ZF) algorithm. In contrast to the typical sum power constraint (SPC), we consider a per-antenna power constraint (PAPC) motivated both by current antenna array designs where each antenna is powered by a separate amplifier and by future wireless networks where spatially separated antennas transmit cooperatively to users. We show that the problem of power allocation for maximizing the weighted sum rate under ZF with PAPC is a constrained convex optimization problem that can be solved using conventional numerical optimization techniques. For the special case of two users, we find an analytic solution based on waterfilling techniques. For the case where the number of users increases without bound, we show that ZF with PAPC is asymptotically optimal in the sense that the ratio of the expected sum-rate capacities between ZF with PAPC and DPC with SPC approaches one. We also show how the results can be generalized for multiple frequency bands and for a hybrid power constraint. Finally, we provide numerical results that show ZF with PAPC achieves a significant fraction of the optimum DPC sum-rate capacity in practical cases where K is bounded",
"",
"We investigate optimum zero-forcing beamforming in multiple antenna broadcast channels with per-antenna power constraints. We show that standard zero-forcing techniques, such as the Moore-Penrose pseudo-inverse, considered mainly in the context of sum-power constrained systems are suboptimal when there are per-antenna power constraints. We formulate convex optimization problems to find the optimum zero-forcing beamforming vectors. Our results indicate that optimizing the antenna outputs based on the per-antenna constraints may improve the rate considerably when the number of transmit antennas is larger the number of receive antennas. Having more transmit antennas gives rise to additional signal space dimensions that may be exploited effectively to reduce transmit power at particular antennas with limited power budget.",
"Recently, the remarkable capacity potential of multiple-input multiple-output (MIMO) wireless communication systems was unveiled. The predicted enormous capacity gain of MIMO is nonetheless significantly limited by cochannel interference (CCI) in realistic cellular environments. The previously proposed advanced receiver technique improves the system performance at the cost of increased receiver complexity, and the achieved system capacity is still significantly away from the interference-free capacity upper bound, especially in environments with strong CCI. In this paper, base station cooperative processing is explored to address the CCI mitigation problem in downlink multicell multiuser MIMO networks, and is shown to dramatically increase the capacity with strong CCI. Both information-theoretic dirty paper coding approach and several more practical joint transmission schemes are studied with pooled and practical per-base power constraints, respectively. Besides the CCI mitigation potential, other advantages of cooperative processing including the power gain, channel rank conditioning advantage, and macrodiversity protection are also addressed. The potential of our proposed joint transmission schemes is verified with both heuristic and realistic cellular MIMO settings.",
"Scaling results for the sum capacity of the multiple access, uplink channel are provided for a flat-fading environment, with multiple-input-multiple-output (MIMO) links, when there is interference from other cells. The classical MIMO scaling regime is considered in which the number of antennas per user and per base station grow large together. Utilizing the known characterizations of the limiting eigenvalue distributions of large random matrices, the asymptotic behavior of the sum capacity of the system is characterized for an architecture in which the base stations cooperate in the joint decoding process of all users (macrodiversity). This asymptotic sum capacity is compared with that of the conventional scenario in which the base stations only decode the users in their cells. For the case of base station cooperation, an interesting \"resource pooling\" phenomenon is observed: in some cases, the limiting performance of a macrodiversity multiuser network has the same asymptotic behavior as that of a single-user MIMO link with an equivalent amount of pooled received power. This resource pooling phenomenon allows us to derive an elegant closed-form expression for the sum capacity of a new version of Wyner's classical model of a cellular network, in which MIMO links are incorporated into the model.",
"The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints"
]
}
|
0909.2894
|
2950152649
|
Downlink spatial intercell interference cancellation (ICIC) is considered for mitigating other-cell interference using multiple transmit antennas. A principal question we explore is whether it is better to do ICIC or simply standard single-cell beamforming. We explore this question analytically and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low ( @math dB), for example in an urban setting. At medium SNR, a proposed adaptive strategy, where multiple base stations jointly select transmission strategies based on the user location, outperforms both while requiring a lower feedback rate than the pure ICIC approach. The employed metric is sum rate, which is normally a dubious metric for cellular systems, but surprisingly we show that even with this reward function the adaptive strategy also improves fairness. When the channel information is provided by limited feedback, the impact of the induced quantization error is also investigated. It is shown that ICIC with well-designed feedback strategies still provides significant throughput gain.
|
In practice, the major challenges for network MIMO concern complexity and overhead. For example, the requirement for CSI grows in proportion to the number of BS antennas, the number of BSs, and the number of users. The complexity of joint processing also grows with the network size. To limit the complexity and CSI requirements, cluster-based coordination is one approach @cite_26 @cite_52 @cite_12 @cite_42 . To reduce the complexity, distributed decoding and beamforming for network MIMO systems were proposed in @cite_36 @cite_46 @cite_0 . In @cite_17 @cite_51 , BS coordination with hybrid channel knowledge was investigated, where each BS has full information of its own CSI and statistical information of other BSs' channels. Limited backhaul capacity @cite_47 @cite_5 and synchronization @cite_37 @cite_32 have also been treated to some extent. A WiMAX-based implementation of network MIMO was done in @cite_9 , for both uplink and downlink in an indoor environment.
|
{
"cite_N": [
"@cite_47",
"@cite_26",
"@cite_37",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_52",
"@cite_32",
"@cite_0",
"@cite_5",
"@cite_46",
"@cite_51",
"@cite_12",
"@cite_17"
],
"mid": [
"2101653184",
"",
"2156262554",
"2016224438",
"1977037692",
"2108662631",
"2094481065",
"2147486330",
"2098995249",
"2030956801",
"2114620279",
"",
"2154813399",
"2123264012"
],
"abstract": [
"It has recently been shown that multi-cell cooperations in cellular networks, enabling distributed antenna systems and joint transmission or joint detection across cell boundaries, can significantly increase capacity, especially that of users at cell borders. Such concepts, typically implicitly assuming unlimited information exchange between base stations, can also be used to increase the network fairness. In practical implementations, however, the large amounts of received signals that need to be quantized and transmitted via an additional backhaul between the involved cells to central processing points, will be a non-negligible issue. In this paper, we thus introduce an analytical framework to observe the uplink performance of cellular networks in which joint detection is only applied to a subset of selected users, aiming at achieving best possible capacity and fairness improvements under a strongly constrained backhaul between sites. This reveals a multi-dimensional optimization problem, where we propose a simple, heuristic algorithm that strongly narrows down and serializes the problem while still yielding a significant performance improvement.",
"",
"Cooperative transmission by base stations (BSs) can significantly improve the spectral efficiency of multiuser, multi-cell, multiple input multiple output (MIMO) systems. We show that contrary to what is often assumed in the literature, the multiuser interference in such systems is fundamentally asynchronous. Intuitively, perfect timing-advance mechanisms can at best only ensure that the desired signal components -but not also the interference components -are perfectly aligned at their intended mobile stations. We develop an accurate mathematical model for the asynchronicity, and show that it leads to a significant performance degradation of existing designs that ignore the asynchronicity of interference. Using three previously proposed linear preceding design methods for BS cooperation, we develop corresponding algorithms that are better at mitigating the impact of the asynchronicity of the interference. Furthermore, we also address timing-advance inaccuracies (jitter), which are inevitable in a practical system. We show that using jitter-statistics-aware precoders can mitigate the impact of these inaccuracies as well. The insights of this paper are critical for the practical implementation of BS cooperation in multiuser MIMO systems, a topic that is typically oversimplified in the literature.",
"This paper considers the problem of joint detection in the uplink of cellular multiaccess networks with base-station cooperation. Distributed multiuser detection algorithms with local message passing among neighbor base stations are proposed and compared in terms of computational complexity required in the base stations, the amount of serial communications among them, error rate performance, and convergence speed. The algorithms based on the belief propagation algorithm result in complexity and delay per base station which do not grow as the network size increases. In addition, it is observed that these algorithms have near single-user error rate performance for the fading channels considered. Thus it is illustrated that using the belief propagation algorithm, it is possible to have full frequency re-use and achieve near-optimal performance with moderate computational complexity and a limited amount of message passing between base stations of adjacent cells.",
"It is well known that multiple-input multiple-output (MIMO) techniques can bring numerous benefits, such as higher spectral efficiency, to point-to-point wireless links. More recently, there has been interest in extending MIMO concepts to multiuser wireless systems. Our focus in this paper is on network MIMO, a family of techniques whereby each end user in a wireless access network is served through several access points within its range of influence. By tightly coordinating the transmission and reception of signals at multiple access points, network MIMO can transcend the limits on spectral efficiency imposed by cochannel interference. Taking prior information-theoretic analyses of network MIMO to the next level, we quantify the spectral efficiency gains obtainable under realistic propagation and operational conditions in a typical indoor deployment. Our study relies on detailed simulations and, for specificity, is conducted largely within the physical-layer framework of the IEEE 802.16e Mobile WiMAX system. Furthermore, to facilitate the coordination between access points, we assume that a high-capacity local area network, such as Gigabit Ethernet, connects all the access points. Our results confirm that network MIMO stands to provide a multiple-fold increase in spectral efficiency under these conditions.",
"A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination-to enhance the sum rate-and limited inter-cluster coordination-to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.",
"We investigate the downlink throughput of cellular systems where groups of M antennas - either co-located or spatially distributed - transmit to a subset of a total population of K > M users in a coherent, coordinated fashion in order to mitigate intercell interference. We consider two types of coordination: the capacity-achieving technique based on dirty paper coding (DPC), and a simpler technique based on zero-forcing (ZF) beamforming with per-antenna power constraints. During a given frame, a scheduler chooses the subset of the K users in order to maximize the weighted sum rate, where the weights are based on the proportional-fair scheduling algorithm. We consider the weighted average sum throughput among K users per cell in a multi-cell network where coordination is limited to a neighborhood of M antennas. Consequently, the performance of both systems is limited by interference from antennas that are outside of the M coordinated antennas. Compared to a 12-sector baseline which uses the same number of antennas per cell site, the throughput of ZF and DPC achieve respective gains of 1.5 and 1.75.",
"We consider synchronization techniques required to enhance the cellular network capacity using base station cooperation. In the physical layer, local oscillators are disciplined by the global positioning system (GPS) and over the backbone network for outdoor and indoor base stations, respectively. In the medium access control (MAC) layer, the data flow can be synchronized by two approaches. The first approach uses so-called time stamps. The data flow through the user plane and through copies of it in each cooperative base station is synchronized using a timing protocol on the interconnects between the base stations. The second approach adds mapping information to the data after the user plane processing is almost finalized. Each forward-error encoded transport block, its modulation and coding scheme and the resources where it will be transmitted are multicast over the interconnect network. Interconnect latency is reduced below 1 ms to enable coherent interference reduction for mobile radio channels.",
"In this paper, we consider multicell processing on the downlink of a cellular network to accomplish ldquomacrodiversityrdquo transmit beamforming. The particular downlink beamformer structure we consider allows a recasting of the downlink beamforming problem as a virtual linear mean square error (LMMSE) estimation problem. We exploit the structure of the channel and develop distributed beamforming algorithms using local message passing between neighboring base stations. For 1-D networks, we use the Kalman smoothing framework to obtain a forward-backward beamforming algorithm. We also propose a limited extent version of this algorithm that shows that the delay need not grow with the size of the network in practice. For 2-D cellular networks, we remodel the network as a factor graph and present a distributed beamforming algorithm based on the sum-product algorithm. Despite the presence of loops in the factor graph, the algorithm produces optimal results if convergence occurs.",
"In this contribution we present new achievable rates, for the non-fading uplink channel of a cellular network, with joint cell-site processing, where unlike previous results, the error-free backhaul network has finite capacity per-cell. Namely, the cell-sites are linked to the central joint processor via lossless links with finite capacity. The cellular network is modeled by the circular Wyner model, which yields closed form expressions for the achievable rates. For this idealistic model, we present achievable rates for cell-sites that use compress-and forward scheme, combined with local decoding, and inter-cell time-sharing. These rates are then demonstrated to be rather close to the optimal unlimited backhaul joint processing rates, already for modest backhaul capacities, supporting the potential gain offered by the joint cell-site processing approach.",
"We consider the problem of multiuser detection in cellular networks. In particular, we present a distributed forward-backward algorithm with local message passing for efficient implementation of the linear minimum mean square error (LMMSE) receiver, for a simple model of a 1D cellular system. The distributed algorithm is based on the well-known interpretation of Kalman smoothing as a linear combination of the forward and backward filtered estimates. We also show that near-optimal performance can be achieved by only relying on information from a local linear segment of the entire array. This results in a limited extent distributed algorithm that greatly reduces processing delay, especially for large networks, yet with little loss in performance.",
"",
"Multi-cell cooperative processing (MCP) has recently attracted a lot of attention because of its potential for co-channel interference (CCI) mitigation and spectral efficiency increase. MCP inevitably requires increased signaling overhead and inter-base communication. Therefore in practice, only a limited number of base stations (BSs) can cooperate in order for the overhead to be affordable. The intrinsic problem of which BSs shall cooperate in a realistic scenario has been only partially investigated. In this contribution linear beamforming has been considered for the sum-rate maximisation of the uplink. A novel dynamic greedy algorithm for the formation of the clusters of cooperating BSs is presented for a cellular network incorporating MCP. This approach is chosen to be evaluated under a fair MS scheduling scenario (round-robin). The objective of the clustering algorithm is sum-rate maximisation of the already selected MSs. The proposed cooperation scheme is compared with some fixed cooperation clustering schemes. It is shown that a dynamic clustering approach with a cluster consisting of 2 cells outperforms static coordination schemes with much larger cluster sizes.",
"This paper explores the idea of cooperative spatial multiplexing for use in MIMO multicell networks. We imagine applying this cooperation for several multiple antenna access-points to jointly transmit streams towards multiple single-antenna user terminals in neighbouring cells. We make the setting more realistic by introducing a constraint on the hybrid channel state information (HCSI), assuming that each transmitter has full CSI for its own channel, but only statistical information about other transmitters’ channels. Each cooperating transmitter then makes guesses about the behaviour of the other transmitters, using the statistical CSI. We show two of several possible transmission strategies under this setting, and include simple optimization at the receiver to improve performance. Comparisons are made with fully cooperative (full CSI) and non-cooperative schemes. Simulation results show a substantial cooperation gain despite the lack of instantaneous information."
]
}
|
0909.2894
|
2950152649
|
Downlink spatial intercell interference cancellation (ICIC) is considered for mitigating other-cell interference using multiple transmit antennas. A principal question we explore is whether it is better to do ICIC or simply standard single-cell beamforming. We explore this question analytically and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low ( @math dB), for example in an urban setting. At medium SNR, a proposed adaptive strategy, where multiple base stations jointly select transmission strategies based on the user location, outperforms both while requiring a lower feedback rate than the pure ICIC approach. The employed metric is sum rate, which is normally a dubious metric for cellular systems, but surprisingly we show that even with this reward function the adaptive strategy also improves fairness. When the channel information is provided by limited feedback, the impact of the induced quantization error is also investigated. It is shown that ICIC with well-designed feedback strategies still provides significant throughput gain.
|
Coordinated single-cell transmission, where the traffic data for each user comes from a single BS, is of lower complexity, requires less inter-BS information exchange, and has lower CSI requirements. Intercell scheduling has been shown to expand the multiuser diversity gain relative to static frequency planning @cite_39 , while coordinated load balancing and intercell scheduling were investigated in @cite_18 @cite_19 . Multi-cell power control algorithms were proposed in @cite_2 @cite_22 . The use of multiple antennas to suppress OCI has also been investigated as a coordinated single-cell transmission strategy, mainly in the form of receive combining. Optimal signal combining for space diversity reception with cochannel interference in cellular networks was proposed in @cite_15 @cite_29 . In @cite_1 @cite_41 , spatial interference cancellation with multiple receive antennas has been exploited in ad hoc networks, which bear some similarity to multicell networks. Receive combining, however, is mainly applicable in the uplink, as there are usually multiple antennas at the BS but only a small number of antennas at the mobile. Downlink beamforming in multicell scenarios was investigated in @cite_25 @cite_27 , with the objective of minimizing the transmit power needed to support the required receive SINR constraints at the mobiles.
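As a rough illustration of the single-cell zero-forcing beamforming that these works compare against coordinated schemes, the following numpy sketch nulls intra-cell interference for K single-antenna users served by an M-antenna BS. The channel values, dimensions, and column normalization are placeholder assumptions, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# K single-antenna users served by a BS with M antennas (M >= K assumed).
M, K = 4, 3
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, columns scaled to unit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W = W / np.linalg.norm(W, axis=0, keepdims=True)

# The effective channel H @ W is diagonal up to numerical error:
# each user sees only its own stream, i.e. intra-cell interference is nulled.
effective = H @ W
leakage = np.abs(effective - np.diag(np.diag(effective))).max()
print("max inter-user leakage:", leakage)
```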
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_25"
],
"mid": [
"2130827801",
"1971386973",
"",
"2021573106",
"2161275813",
"2128192883",
"1975618234",
"",
"",
"2033860388",
""
],
"abstract": [
"Third generation code-division multiple access (CDMA) systems propose to provide packet data service through a high speed shared channel with intelligent and fast scheduling at the base-stations. In the current approach base-stations schedule independently of other base-stations. We consider scheduling schemes in which scheduling decisions are made jointly for a cluster of cells thereby enhancing performance through interference avoidance and dynamic load balancing. We consider algorithms that assume complete knowledge of the channel quality information from each of the base-stations to the terminals at the centralized scheduler as well as a two-tier scheduling strategy that assumes only the knowledge of the long term channel conditions at the centralized scheduler. We demonstrate that in the case of asymmetric traffic distribution, where load imbalance is most pronounced, significant throughput gains can be obtained while the gains in the symmetric case are modest. Since the load balancing is achieved through centralized scheduling, our scheme can adapt to time-varying traffic patterns dynamically.",
"We address the problem of multicell co-channel scheduling in view of mitigating interference in a wireless data network with full spectrum reuse. The centralized joint multicell scheduling optimization problem, based on the complete co-channel gain information, has so far been justly considered impractical due to complexity and real-time cell-to-cell signaling overhead. However, we expose here the following remarkable result for a large network with a standard power control policy. The capacity maximizing joint multicell scheduling problem admits a simple and fully distributed solution. This result is proved analytically for an idealized network. From the constructive proof, we propose a practical algorithm that is shown to achieve near maximum capacity for realistic cases of simulated networks of even small sizes.",
"",
"For a broad class of interference-dominated wireless systems including mobile, personal communications, and wireless PBX LAN networks, the authors show that a significant increase in system capacity can be achieved by the use of spatial diversity (multiple antennas), and optimum combining. This is explained by the following observation: for independent flat-Rayleigh fading wireless systems with N mutually interfering users, they demonstrate that with K+N antennas, N-1 interferers can be nulled out and K+1 path diversity improvement can be achieved by each of the N users. Monte Carlo evaluations show that these results also hold with frequency-selective fading when optimum equalization is used at the receiver. Thus an N-fold increase in user capacity can be achieved, allowing for modular growth and improved performance by increasing the number of antennas. The interferers can also be users in other cells, users in other radio systems, or even other types of radiating devices, and thus interference cancellation also allows radio systems to operate in high interference environments. As an example of the potential system gain, the authors show that with 2 or 3 antennas the capacity of the mobile radio system IS-54 can be doubled, and with 5 antennas a 7-fold capacity increase (frequency reuse in every cell) can be achieved. >",
"The benefit of multiple antenna communication is investigated in wireless ad hoc networks, and the primary finding is that throughput can be made to scale linearly with the number of receive antennas even if each transmitting node uses only a single antenna. The linear throughput gain is achieved by (i) using the receive antennas to cancel the signals of nearby interferers as well as to increase signal power (i.e., for array gain), and (ii) maintaining a constant per-link rate and increasing the spatial density of simultaneous transmissions linearly with the number of antennas at each receiver. Numerical results show that even a few receive antennas provide substantial throughput gains, thereby illustrating that the asymptotic linear scaling result is also indicative of performance for reasonable numbers of antennas.",
"The capacity and robustness of cellular MIMO systems is very sensitive to other-cell interference which will in practice necessitate network level interference reduction strategies. As an alternative to traditional static frequency reuse patterns, this paper investigates intercell scheduling among neighboring base stations. We show analytically that cooperatively scheduled transmission, which is well within the capability of present systems, can achieve an expanded multiuser diversity gain in terms of ergodic capacity as well as almost the same amount of interference reduction as conventional frequency reuse. This capacity gain over conventional frequency reuse is O (M t square-root of log Ns) for dirty paper coding and O (min (Mr, Mt) square-root of log Ns) for time division, where Ns is the number of cooperating base stations employing opportunistic scheduling in an M t x M r MIMO system. From a theoretical standpoint, an interesting aspect of this analysis comes from an altered view of multiuser diversity in the context of a multi-cell system. Previously, multiuser diversity capacity gain has been known to grow as O(log log K), from selecting the maximum of K exponentially-distributed powers. Because multicell considerations such as the positions of the users, lognormal shadowing, and pathless affect the multiuser diversity gain, we find instead that the gain is O(square-root of 2logic K), from selecting the maximum of a compound Iognormal-exponential distribution. Finding the maximum of such a distribution is an additional contribution of the paper.",
"We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.",
"",
"",
"This paper studies optimum signal combining for space diversity reception in cellular mobile radio systems. With optimum combining, the signals received by the antennas are weighted and combined to maximize the output signal-to-interference-plus-noise ratio. Thus, with cochannel interference, space diversity is used not only to combat Rayleigh fading of the desired signal (as with maximal ratio combining) but also to reduce the power of interfering signals at the receiver. We use analytical and computer simulation techniques to determine the performance of optimum combining when the received desired and interfering signals are subject to Rayleigh fading. Results show that optimum combining is significantly better than maximal ratio combining even when the number of interferers is greater than the number of antennas. Results for typical cellular mobile radio systems show that optimum combining increases the output signal-to-interference ratio at the receiver by several decibels. Thus, systems can require fewer base station antennas and or achieve increased channel capacity through greater frequency reuse. We also describe techniques for implementing optimum combining with least mean square (LMS) adaptive arrays.",
""
]
}
|
0909.3146
|
2952972470
|
Systems exploiting network coding to increase their throughput suffer greatly from pollution attacks, which consist of injecting malicious packets into the network. The pollution attacks are amplified by the network coding process, resulting in greater damage than under traditional routing. In this paper, we address this issue by designing an unconditionally secure authentication code suitable for multicast network coding. The proposed scheme is robust against pollution attacks from outsiders, as well as coalitions of malicious insiders. Intermediate nodes can verify the integrity and origin of the packets received without having to decode, and thus detect and discard, in transit, the malicious messages that fail the verification. This way, the pollution is canceled out before reaching the destinations. We analyze the performance of the scheme in terms of both multicast throughput and goodput, and show the goodput gains. We also discuss applications to file distribution.
|
Several authentication schemes have recently been proposed in the literature to detect polluted packets at intermediate nodes @cite_19 @cite_18 @cite_4 @cite_9 @cite_6 . All of them are based on cryptographic functions with computational assumptions, as detailed below. The scheme in @cite_19 for network-coded content distribution allows intermediate nodes to detect malicious packets injected into the network and to alert neighboring nodes when a malicious packet is detected. It uses a homomorphic hash function to generate hash values of the encoded blocks of data, which are then sent to the intermediate nodes and destinations prior to the encoded data. The transmission of these hash values is performed over a pre-established secure channel, which makes the scheme impractical. The use of hash functions makes the scheme fall into the category of computationally secure schemes.
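To make the homomorphic-hash idea concrete, here is a toy sketch. It is not the actual construction used in @cite_19 , which works in cryptographically large groups; the modulus, generators, block length and coding coefficients below are assumptions chosen only so the algebra is visible.

```python
# Toy homomorphic hash over the quadratic residues mod a small safe prime.
# h(x) = prod_i g_i^{x_i} mod p, with exponents taken mod q = (p-1)/2.
p, q = 23, 11                  # toy safe prime p and subgroup order q
g = [4, 9, 13, 16]             # squares mod 23, i.e. elements of the order-q subgroup

def h(block):
    out = 1
    for gi, xi in zip(g, block):
        out = (out * pow(gi, xi % q, p)) % p
    return out

u = [3, 1, 4, 1]
v = [2, 7, 1, 8]
a, b = 5, 9                    # coding coefficients (mod q)
w = [(a * ui + b * vi) % q for ui, vi in zip(u, v)]   # network-coded block

# Homomorphic property: an intermediate node can check h(w) from h(u), h(v)
# without seeing u or v, and would reject a polluted block that breaks the relation.
assert h(w) == (pow(h(u), a, p) * pow(h(v), b, p)) % p
print("coded block verified against the source hashes")
```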
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_6",
"@cite_19"
],
"mid": [
"2130093830",
"2137771025",
"",
"1743877615",
"2162219773"
],
"abstract": [
"This paper presents a practical digital signature scheme to be used in conjunction with network coding. Our scheme simultaneously provides authentication and detects malicious nodes that intentionally corrupt content on the network.",
"Network coding provides the possibility to maximize network throughput and receives various applications in traditional computer networks, wireless sensor networks and peer-to-peer systems. However, the applications built on top of network coding are vulnerable to pollution attacks, in which the compromised forwarders can inject polluted or forged messages into networks. Existing schemes addressing pollution attacks either require an extra secure channel or incur high computation overhead. In this paper, we propose an efficient signature-based scheme to detect and filter pollution attacks for the applications adopting linear network coding techniques. Our scheme exploits a novel homomorphic signature function to enable the source to delegate its signing authority to forwarders, that is, the forwarders can generate the signatures for their output messages without contacting the source. This nice property allows the forwarders to verify the received messages, but prohibit them from creating the valid signatures for polluted or forged ones. Our scheme does not need any extra secure channels, and can provide source authentication and batch verification. Experimental results show that it can improve computation efficiency up to ten times compared to some existing one. In addition, we present an alternate lightweight scheme based on a much simpler linear signature function. This alternate scheme provides a tradeoff between computation efficiency and security.",
"",
"Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to modify data packets en route ; for this reason, standard signature schemes are inapplicable and it is a challenge to provide resilience to tampering by malicious nodes. We propose two signature schemes that can be used in conjunction with network coding to prevent malicious modification of data. Our schemes can be viewed as signing linear subspaces in the sense that a signature *** on a subspace V authenticates exactly those vectors in V . Our first scheme is (suitably) homomorphic and has constant public-key size and per-packet overhead. Our second scheme does not rely on random oracles and is based on weaker assumptions. We also prove a lower bound on the length of signatures for linear subspaces showing that our schemes are essentially optimal in this regard.",
"Peer-to-peer content distribution networks can suffer from malicious participants that corrupt content. Current systems verify blocks with traditional cryptographic signatures and hashes. However, these techniques do not apply well to more elegant schemes that use network coding techniques for efficient content distribution. Architectures that use network coding are prone to jamming attacks where the introduction of a few corrupted blocks can quickly result in a large number of bad blocks propagating through the system. Identifying such bogus blocks is difficult and requires the use of homomorphic hashing functions, which are computationally expensive. This paper presents a practical security scheme for network coding that reduces the cost of verifying blocks on-the-fly while efficiently preventing the propagation of malicious blocks. In our scheme, users not only cooperate to distribute the content, but (well-behaved) users also cooperate to protect themselves against malicious users by informing affected nodes when a malicious block is found. We analyze and study such cooperative security scheme and introduce elegant techniques to prevent DoS attacks. We show that the loss in the efficiency caused by the attackers is limited to the effort the attackers put to corrupt the communication, which is a natural lower bound in the damage of the system. We also show experimentally that checking as low as 1-5 of the received blocks is enough to guarantee low corruption rates."
]
}
|
0909.3146
|
2952972470
|
Systems exploiting network coding to increase their throughput suffer greatly from pollution attacks, which consist of injecting malicious packets into the network. The pollution attacks are amplified by the network coding process, resulting in greater damage than under traditional routing. In this paper, we address this issue by designing an unconditionally secure authentication code suitable for multicast network coding. The proposed scheme is robust against pollution attacks from outsiders, as well as coalitions of malicious insiders. Intermediate nodes can verify the integrity and origin of the packets received without having to decode, and thus detect and discard, in transit, the malicious messages that fail the verification. This way, the pollution is canceled out before reaching the destinations. We analyze the performance of the scheme in terms of both multicast throughput and goodput, and show the goodput gains. We also discuss applications to file distribution.
|
The signature scheme in @cite_18 is a homomorphic signature scheme based on Weil pairing over elliptic curves, while the one proposed in @cite_4 is a homomorphic signature scheme based on RSA. For both schemes, intermediate nodes can authenticate the packets in transit without decoding, and generate a verifiable signature of the packet that they have just encoded without knowing the signer's secret key. However, these schemes require one key pair for each file to be verified, which is not practical either.
|
{
"cite_N": [
"@cite_18",
"@cite_4"
],
"mid": [
"2130093830",
"2137771025"
],
"abstract": [
"This paper presents a practical digital signature scheme to be used in conjunction with network coding. Our scheme simultaneously provides authentication and detects malicious nodes that intentionally corrupt content on the network.",
"Network coding provides the possibility to maximize network throughput and receives various applications in traditional computer networks, wireless sensor networks and peer-to-peer systems. However, the applications built on top of network coding are vulnerable to pollution attacks, in which the compromised forwarders can inject polluted or forged messages into networks. Existing schemes addressing pollution attacks either require an extra secure channel or incur high computation overhead. In this paper, we propose an efficient signature-based scheme to detect and filter pollution attacks for the applications adopting linear network coding techniques. Our scheme exploits a novel homomorphic signature function to enable the source to delegate its signing authority to forwarders, that is, the forwarders can generate the signatures for their output messages without contacting the source. This nice property allows the forwarders to verify the received messages, but prohibit them from creating the valid signatures for polluted or forged ones. Our scheme does not need any extra secure channels, and can provide source authentication and batch verification. Experimental results show that it can improve computation efficiency up to ten times compared to some existing one. In addition, we present an alternate lightweight scheme based on a much simpler linear signature function. This alternate scheme provides a tradeoff between computation efficiency and security."
]
}
|
0909.3146
|
2952972470
|
Systems exploiting network coding to increase their throughput suffer greatly from pollution attacks, which consist of injecting malicious packets into the network. The pollution attacks are amplified by the network coding process, resulting in greater damage than under traditional routing. In this paper, we address this issue by designing an unconditionally secure authentication code suitable for multicast network coding. The proposed scheme is robust against pollution attacks from outsiders, as well as coalitions of malicious insiders. Intermediate nodes can verify the integrity and origin of the packets received without having to decode, and thus detect and discard, in transit, the malicious messages that fail the verification. This way, the pollution is canceled out before reaching the destinations. We analyze the performance of the scheme in terms of both multicast throughput and goodput, and show the goodput gains. We also discuss applications to file distribution.
|
The signature scheme proposed in @cite_5 uses a standard signature scheme based on the hardness of the discrete logarithm problem. The blocks of data are considered as vectors spanning a subspace. The signature is not computed on the vectors containing the data blocks, but on vectors orthogonal to all data vectors in the given subspace. The signature verification then allows one to check whether a received vector belongs to the data subspace. The security of the scheme rests on the fact that no adversary knowing a signature on a given subspace of data vectors is able to forge a valid signature for any vector outside this subspace. This scheme also requires fresh keys for every file. Finally, the signature schemes given in @cite_6 follow the approach of @cite_5 with improvements in terms of public key size and per-packet overhead. The proposed signature schemes are designed to authenticate a linear subspace formed by the vectors containing the data blocks. Signatures on a linear subspace are sufficient to authenticate all the vectors in this same subspace. With these schemes, a single public key can be used to verify multiple files.
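The subspace-membership test underlying @cite_5 can be sketched numerically. The sketch below drops the cryptographic signing of the orthogonal vectors entirely and only shows the linear-algebra part; the systematic form of the data blocks and the field size are assumptions made for illustration.

```python
# Data blocks in systematic form: rows of V = [I_k | B] over GF(p).
# Rows of U = [-B^T | I_{n-k}] are orthogonal to every row of V, so a packet w is a
# valid combination of the data blocks iff U @ w == 0 (mod p). In the cited scheme the
# rows of U are what the source signs; here we only perform the membership test.
p = 257
V = [[1, 0, 2, 3],
     [0, 1, 4, 5]]
k, n = len(V), len(V[0])
B = [row[k:] for row in V]
U = [[(-B[i][j]) % p for i in range(k)] + [1 if t == j else 0 for t in range(n - k)]
     for j in range(n - k)]

def in_span(w):
    return all(sum(u_i * w_i for u_i, w_i in zip(u, w)) % p == 0 for u in U)

coded = [(3 * a + 7 * b) % p for a, b in zip(*V)]        # honest network-coded packet
polluted = coded[:]
polluted[2] = (polluted[2] + 1) % p                      # one symbol tampered with

print(in_span(coded), in_span(polluted))                 # True False
```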
|
{
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2153098283",
"1743877615"
],
"abstract": [
"Recent research has shown that network coding can be used in content distribution systems to improve the speed of downloads and the robustness of the systems. However, such systems are very vulnerable to attacks by malicious nodes, and we need to have a signature scheme that allows nodes to check the validity of a packet without decoding. In this paper, we propose such a signature scheme for network coding. Our scheme makes use of the linearity property of the packets in a coded system, and allows nodes to check the integrity of the packets received easily. We show that the proposed scheme is secure, and its overhead is negligible for large files.",
"Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to modify data packets en route ; for this reason, standard signature schemes are inapplicable and it is a challenge to provide resilience to tampering by malicious nodes. We propose two signature schemes that can be used in conjunction with network coding to prevent malicious modification of data. Our schemes can be viewed as signing linear subspaces in the sense that a signature *** on a subspace V authenticates exactly those vectors in V . Our first scheme is (suitably) homomorphic and has constant public-key size and per-packet overhead. Our second scheme does not rely on random oracles and is based on weaker assumptions. We also prove a lower bound on the length of signatures for linear subspaces showing that our schemes are essentially optimal in this regard."
]
}
|
0909.3257
|
2950986816
|
Much work has been devoted, during the past twenty years, to using complexity to protect elections from manipulation and control. Many results have been obtained showing NP-hardness shields, and recently there has been much focus on whether such worst-case hardness protections can be bypassed by frequently correct heuristics or by approximations. This paper takes a very different approach: We argue that when electorates follow the canonical political science model of societal preferences, the complexity shield never existed in the first place. In particular, we show that for electorates having single-peaked preferences, many existing NP-hardness results on manipulation and control evaporate.
|
The paper that inspired our work is Walsh's "Uncertainty in Preference Elicitation and Aggregation" @cite_39 . Among other things, in that paper he raises the issue of manipulation in single-peaked societies. Our paper follows his model of assuming that society's linear ordering of the candidates is given and that manipulative voters must be single-peaked with respect to that ordering. However, our theme and his differ. His manipulation results present cases where single-peakedness leaves an @math -completeness shield intact. In particular, for both the constructive and the destructive cases, he shows that the coalition weighted manipulation problem for the single transferable vote election rule for three or more candidates remains @math -hard in the single-peaked case. Although our Theorem follows this path of seeing where shields remain intact for single-peaked preferences, the central focus of our paper is that single-peaked preferences often remove complexity shields on manipulation and control. For a different issue, namely looking at incomplete profiles and asking whether some or all of the completions make a candidate a winner, Walsh's paper proves both @math results and @math -completeness results. We're greatly indebted to his paper for raising and exploring the issue of manipulation for single-peaked electorates.
|
{
"cite_N": [
"@cite_39"
],
"mid": [
"108432896"
],
"abstract": [
"Uncertainty arises in preference aggregation in several ways. There may, for example, be uncertainty in the votes or the voting rule. Such uncertainty can introduce computational complexity in determining which candidate or candidates can or must win the election. In this paper, we survey recent work in this area and give some new results. We argue, for example, that the set of possible winners can be computationally harder to compute than the necessary winner. As a second example, we show that, even if the unknown votes are assumed to be single-peaked, it remains computationally hard to compute the possible and necessary winners, or to manipulate the election."
]
}
|
0909.3257
|
2950986816
|
Much work has been devoted, during the past twenty years, to using complexity to protect elections from manipulation and control. Many results have been obtained showing NP-hardness shields, and recently there has been much focus on whether such worst-case hardness protections can be bypassed by frequently correct heuristics or by approximations. This paper takes a very different approach: We argue that when electorates follow the canonical political science model of societal preferences, the complexity shield never existed in the first place. In particular, we show that for electorates having single-peaked preferences, many existing NP-hardness results on manipulation and control evaporate.
|
As mentioned in the main text, Bartholdi and Trick @cite_27 , Doignon and Falmagne @cite_37 , and Escoffier, Lang, and Öztürk @cite_1 have provided efficient algorithms for testing single-peakedness and producing a valid candidate linear ordering, for the case when votes are linear orders.
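For a fixed candidate axis, single-peakedness of an individual linear order can be checked with the standard endpoint-elimination argument; the cited algorithms additionally construct a valid axis when none is given. A minimal sketch, with a hypothetical axis and votes:

```python
def is_single_peaked(ranking, axis):
    """ranking: candidates from most to least preferred; axis: societal left-right order.
    Walking up from the least preferred candidate, each one must sit at an end of the
    axis restricted to the candidates not yet eliminated."""
    remaining = list(axis)
    for cand in reversed(ranking):
        if cand != remaining[0] and cand != remaining[-1]:
            return False
        remaining.remove(cand)
    return True

axis = ["left", "centre-left", "centre", "centre-right", "right"]   # hypothetical axis
print(is_single_peaked(["centre", "centre-right", "centre-left", "right", "left"], axis))  # True
print(is_single_peaked(["left", "right", "centre", "centre-left", "centre-right"], axis))  # False
```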
|
{
"cite_N": [
"@cite_27",
"@cite_1",
"@cite_37"
],
"mid": [
"2042544885",
"",
"2004142419"
],
"abstract": [
"We study a special case of the Stable Roommates problem in which preferences are derived from a psychological model common in social choice literature. When preferences are 'single-peaked' and 'narcissistic', there exists a unique stable matching, and it can be constructed in O(n) time. We also show how to recognize quickly, when a set of preferences is single-peaked.",
"",
"Abstract Two conditions on a collection of simple orders - unimodality and straightness - are necessary but not jointly sufficient for unidimensional unfolding representations. From the analysis of these conditions, a polynomial time algorithm is derived for the testing of unidimensionality and for the construction of a representation when one exists."
]
}
|
0909.3445
|
2949330618
|
We present a method for grouping the synonyms of a lemma according to its dictionary senses. The senses are defined by a large machine-readable dictionary for French, the TLFi (Trésor de la langue française informatisé), and the synonyms are given by 5 synonym dictionaries (also for French). To evaluate the proposed method, we manually constructed a gold standard where, for each (word, definition) pair and given the set of synonyms defined for that word by the 5 synonym dictionaries, 4 lexicographers specified the set of synonyms they judge adequate. While inter-annotator agreement ranges on that task from 67% to at best 88% depending on the annotator pair and on the synonym dictionary being considered, the automatic procedure we propose scores a precision of 67% and a recall of 71%. The proposed method is compared with related work, namely word sense disambiguation, synonym lexicon acquisition and WordNet construction.
|
It would in principle be possible to use an unsupervised approach and attempt to disambiguate synonyms on the basis of raw corpora. Such approaches however are not based on a fixed list of senses where the senses for a target word are a closed list coming from a dictionary. Instead they induce word senses directly from the corpus by using clustering techniques, which group together similar examples. To associate synonyms with definitions, it would therefore be necessary to define an additional mapping between corpus induced word senses and dictionary definitions. As noted in @cite_14 , such a mapping usually introduces noise and information loss however.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1974976142"
],
"abstract": [
"Veronis (2004) has recently proposed an innovative unsupervised algorithm for word sense disambiguation based on small-world graphs called HyperLex. This paper explores two sides of the algorithm. First, we extend Veronis' work by optimizing the free parameters (on a set of words which is different to the target set). Second, given that the empirical comparison among unsupervised systems (and with respect to supervised systems) is seldom made, we used hand-tagged corpora to map the induced senses to a standard lexicon (WordNet) and a publicly available gold standard (Senseval 3 English Lexical Sample). Our results for nouns show that thanks to the optimization of parameters and the mapping method, HyperLex obtains results close to supervised systems using the same kind of bag-of-words features. Given the information loss inherent in any mapping step and the fact that the parameters were tuned for another set of words, these are very interesting results."
]
}
|
0909.3445
|
2949330618
|
We present a method for grouping the synonyms of a lemma according to its dictionary senses. The senses are defined by a large machine-readable dictionary for French, the TLFi (Trésor de la langue française informatisé), and the synonyms are given by 5 synonym dictionaries (also for French). To evaluate the proposed method, we manually constructed a gold standard where, for each (word, definition) pair and given the set of synonyms defined for that word by the 5 synonym dictionaries, 4 lexicographers specified the set of synonyms they judge adequate. While inter-annotator agreement ranges on that task from 67% to at best 88% depending on the annotator pair and on the synonym dictionary being considered, the automatic procedure we propose scores a precision of 67% and a recall of 71%. The proposed method is compared with related work, namely word sense disambiguation, synonym lexicon acquisition and WordNet construction.
|
Like work on synonym extraction, the WOLF approach differs from ours in that synonyms are automatically extracted from linguistic data (i.e., a parallel corpus and the Balkanet WordNets) rather than taken from a set of existing synonym dictionaries, thereby introducing errors in the synsets; @cite_5 @cite_8 report a precision of 63.2%. A second difference is that our approach associates synsets with a French definition (from the TLFi) rather than an English one (from the Princeton WordNet via the synset identifier). A third difference is that we do not map definitions to a Princeton WordNet synset identifier and therefore cannot reconstruct a network of lexical relations between synsets. More generally, the two approaches are complementary in that ours provides the seeds for a merge construction of a French WordNet whilst @cite_5 @cite_8 pursue an extend approach.
|
{
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"6275383",
"1568246863"
],
"abstract": [
"This paper describes automatic construction a freely-available wordnet for French (WOLF) based on Princeton WordNet (PWN) by using various multilingual resources. Polysemous words were dealt with an approach in which a parallel corpus for five languages was word-aligned and the extracted multilingual lexicon was disambiguated with the existing wordnets for these languages. On the other hand, a bilingual approach sufficed to acquire equivalents for monosemous words. Bilingual lexicons were extracted from Wikipedia and thesauri. The results obtained from each resource were merged and ranked according to the number of resources yielding the same literal. Automatic evaluation of the merged wordnet was performed with the French WordNet (FREWN). Manual evaluation was also carried out on a sample of the generated synsets. Precision shows that the presented approach has proved to be very promising and applications to use the created wordnet are already intended.",
"This paper compares automatically generated sets of synonyms in French and Slovene wordnets with respect to the resources used in the construction process. Polysemous words were disambiguated via a five-language word-alignment of the SEERA.NET parallel corpus, a subcorpus of the JRC Acquis. The extracted multilingual lexicon was disambiguated with the existing wordnets for these languages. On the other hand, a bilingual approach sufficed to acquire equivalents for monosemous words. Bilingual lexicons were extracted from different resources, including Wikipedia, Wiktionary and EUROVOC thesaurus. A representative sample of the generated synsets was evaluated against the goldstandards."
]
}
|
0909.2005
|
1600996865
|
We present a deterministic algorithm that, given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
|
Much of previous work dealt with the cover time from the worst possible starting vertex in the graph. In this case, the maximum hitting time serves as a lower bound on the cover time. Moreover, as shown by Matthews @cite_4 , the cover time can exceed the maximum hitting time by a factor of at most @math . Hence the hitting time (which is computable in deterministic polynomial time) provides a @math approximation to the cover time. An extension of this approach leads to an algorithm with a better approximation ratio of @math @cite_9 . An approach of upper bounding the cover time based on spanning trees is presented in @cite_2 . In particular, when it is applied to trees it implies that the cover and return time is at most @math (which is attained for a path with @math vertices), and for general graphs it gives an upper bound of @math (which can be improved to essentially @math with more careful analysis @cite_7 ). For some graphs, this approach based on spanning trees gives a very good approximation of the cover time.
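As the cited abstracts point out, the cover-and-return time itself is easy to estimate by plain Monte-Carlo simulation; the interest of the paper is a deterministic alternative. A simulation sketch for a path started at an endpoint, where the walk must reach the far end and come back, so the exact value is 2(n-1)^2 (the path length and trial count below are arbitrary choices):

```python
import random

def cover_and_return_time(adj, start):
    """One simple-random-walk sample of the time to visit every vertex and return to start."""
    pos, seen, steps = start, {start}, 0
    while len(seen) < len(adj) or pos != start:
        pos = random.choice(adj[pos])
        seen.add(pos)
        steps += 1
    return steps

n = 20                                            # path with n vertices, start at one end
adj = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
trials = 2000
est = sum(cover_and_return_time(adj, 0) for _ in range(trials)) / trials
print(est, 2 * (n - 1) ** 2)                      # Monte-Carlo estimate vs. exact 2(n-1)^2
```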
|
{
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_7",
"@cite_2"
],
"mid": [
"2136645553",
"2068008593",
"2170807097",
"2048572907"
],
"abstract": [
"We prove upper and lower bounds and give an approximation algorithm for the cover time of the random walk on a graph. We introduce a parameter M motivated by the well-known Matthews bounds (P. Matthews, 1988) on the cover time, C, and prove that M 2<C= O(M(lnlnn) sup 2 ). We give a deterministic-polynomial time algorithm to approximate M within a factor of 2; this then approximates C within a factor of O((lnlnn) sup 2 ), improving the previous bound O(lnn) due to Matthews. The blanket time B was introduced by P. Winkler and D. Zuckerman (1996): it is the expectation of the first time when all vertices are visited within a constant factor of the number of times suggested by the stationary distribution. Obviously C spl les B. Winkler and Zuckerman conjectured B=O(C) and proved B=O(Clnn). Our bounds above are also valid for the blanket time, and so it follows that B=O(C(lnlnn) sup 2 ).",
"On donne des bornes superieures et inferieures sur la fonction generatrice des moments du temps pris par une chaine de Markov pour visiter au moins n des N sous-ensembles selectionnes de son espace d'etats",
"We prove that the expected time for a random walk to cover all n vertices of a graph is at least (1 + o(1))n In n. © 1995 Wiley Periodicals, Inc.",
""
]
}
|
0909.2005
|
1600996865
|
We present a deterministic algorithm that, given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
|
When one seeks to estimate the cover time from a given vertex (rather than from the worst possible vertex), the known bounds deteriorate. The known deterministic algorithms @cite_0 @cite_10 pay an extra @math factor in the approximation ratio compared with the ratios achievable from the worst possible vertex. For the special case of trees, some upper bounds are presented in @cite_3 .
|
{
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_3"
],
"mid": [
"1989885196",
"1970421460",
"2913110029"
],
"abstract": [
"The cover time is the expected time it takes a random walk to cover all vertices of a graph. Despite the fact that it can be approximated with arbitrary precision by a simple polynomial time Monte-Carlo algorithm which simulates the random walk, it is not known whether the cover time of a graph can be computed in deterministic polynomial time. In the present paper we establish a deterministic polynomial time algorithm that, for any graph and any starting vertex, approximates the cover time within polylogarithmic factors. More generally, our algorithm approximates the cover time for arbitrary reversible Markov chains. The new aspect of our algorithm is that the starting vertex of the random walk may be arbitrary and is given as part of the input, whereas previous deterministic approximation algorithms for the cover time assume that the walk starts at the worst possible vertex. In passing, we show that the starting vertex can make a difference of up to a multiplicative factor of Θ(n3 2 √log n) in the cover time of an n-vertex graph.",
"Feige and Rabinovich, in [Feige and Rabinovich, Rand. Struct. Algorithms 23(1) (2003) 1-22], gave a deterministic O(log4n) approximation for the time it takes a random walk to cover a given graph starting at a given vertex. This approximation algorithm was shown to work for arbitrary reversible Markov chains. We build on the results of [Feige and Rabinovich, Rand. Struct. Algorithms 23(1) (2003) 1-22], and show that the original algorithm gives a O(log2n) approximation as it is, and that it can be modified to give a O(log n(log log n)2) approximation. Moreover, we show that given any c(n)-approximation algorithm for the maximum cover time (maximized over all initial vertices) of a reversible Markov chain, we can give a corresponding algorithm for the general cover time (of a random walk or reversible Markov chain) with approximation ratio O(c(n) log n).",
"We consider the cover time E>sub sub sub sub sub sub sub sub sub sub 2n^2. This improves the leadingconstant in previously known upper bounds. also provide upper bouhnds on E^+_uG, the expected timeto cover G and return to u"
]
}
|
0909.2005
|
1600996865
|
We present a deterministic algorithm that, given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
|
There are some special families of graphs for which the cover time is known exactly (e.g., for paths, cycles and complete graphs), or almost exactly (e.g., for balanced trees @cite_5 and for two and higher dimensional grids @cite_6 @cite_11 ).
|
{
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_11"
],
"mid": [
"1971599265",
"2120076230",
"1586184796"
],
"abstract": [
"Abstract For simple random walk on a finite tree, the cover time is the time taken to visit every vertex. For the balanced b-ary tree of height m, the cover time is shown to be asymptotic to 2m 2 b m + 1 ( log b) (b − 1) as m → ∞ . On the uniform random labeled tree on n vertices, we give a convincing heuristic argument that the mean time to cover and return to the root is asymptotic to 6(2π) 1 2 n 3 2 , and prove a weak O(n 3 2 ) upper bound. The argument rests upon a recursive formula for cover time of trees generated by a simple branching process.",
"LetT (x;\") denote the rst hitting time of the disc of radius \" centered at x for Brownian motion on the two dimensional torus T 2 . We prove that sup x2T2T (x;\")=j log\"j 2 ! 2= as \" ! 0. The same applies to Brownian motion on any smooth, compact connected, two- dimensional, Riemannian manifold with unit area and no boundary. As a consequence, we prove a conjecture, due to Aldous (1989), that the number of steps it takes a simple random walk to cover all points of the lattice torus Z 2 is asymptotic to 4n 2 (logn) 2 = . Determining these asymptotics is an essential step toward analyzing the fractal structure of the set of uncovered sites before coverage is complete; so far, this structure was only studied non-rigorously in the physics literature. We also establish a conjecture, due to Kesten and R ev esz, that describes the asymptotics for the number of steps needed by simple random walk in Z 2 to cover the disc of radius n.",
""
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
Randomized gossip was proposed in @cite_19 as a decentralized asynchronous scheme for solving the average consensus problem. At the @math th iteration of randomized gossip, a node @math is chosen uniformly at random. It chooses a neighbor, @math , randomly, and this pair of nodes "gossips": @math and @math exchange values and perform the update @math , and all other nodes remain unchanged. One can show that under very mild conditions on the way a random neighbor, @math , is drawn, the values @math converge to @math at every node @math as @math @cite_9 . Because of the broadcast nature of wireless transmission, other neighbors overhear the messages exchanged between the active pair of nodes, but they do not make use of this information in existing randomized gossip algorithms.
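A minimal sketch of the pairwise update just described, on a hypothetical ring topology; the values converge to the average of the initial measurements. The topology, sizes and iteration count are illustrative assumptions.

```python
import random

def randomized_gossip(x, neighbors, iters, seed=0):
    """Pairwise randomized gossip: a uniformly random node averages with a random neighbor."""
    random.seed(seed)
    x = list(x)
    for _ in range(iters):
        i = random.randrange(len(x))
        j = random.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

n = 20
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring topology
x0 = [float(i) for i in range(n)]                               # true average is 9.5
x = randomized_gossip(x0, neighbors, iters=5000)
print(max(abs(v - sum(x0) / n) for v in x))                     # near 0 after enough iterations
```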
|
{
"cite_N": [
"@cite_19",
"@cite_9"
],
"mid": [
"2117905067",
"2074796812"
],
"abstract": [
"Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of \"gossip\" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.",
"Abstract We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several common heuristics that are based on the Laplacian of the associated graph. We show how problem structure can be exploited to speed up interior-point methods for solving the fastest distributed linear iteration problem, for networks with up to a thousand or so edges. We also describe a simple subgradient method that handles far larger problems, with up to 100 000 edges. We give several extensions and variations on the basic problem."
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
The convergence rate of randomized gossip is characterized by relating the algorithm to a Markov chain @cite_19 . The mixing time of this Markov chain is closely related to the averaging time of the gossip algorithm, and therefore defines the rate of convergence. For certain types of graph topologies, the mixing times are small and convergence of the gossip algorithm is fast. For example, in the case of a complete graph, the algorithm requires @math iterations to converge. However, topologies such as random geometric graphs @cite_2 or grids are more realistic for wireless applications. @cite_19 prove that for random geometric graphs, randomized gossip requires @math transmissions to approximate the average consensus well. (Throughout this paper, when we refer to randomized gossip we specifically mean the natural random walk version of the algorithm, where the node @math is chosen uniformly from the set of neighbors of @math at each iteration. For random geometric graph topologies, which are of most interest to us, @cite_19 prove that the performance of the natural random walk algorithm scales order-wise identically to that of the optimal choice of transition probabilities, so there is no loss of generality.)
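The dependence on the second-largest eigenvalue can be made concrete by forming the expected averaging matrix of the natural random walk version on a small graph. The sketch below follows the standard pairwise-gossip analysis rather than any code from the cited work, and the ring topology and size are illustrative assumptions.

```python
import numpy as np

def expected_gossip_matrix(neighbors):
    """E[W] for natural-random-walk pairwise gossip: node i is chosen w.p. 1/n, picks a
    neighbor j uniformly, and the pair applies the averaging matrix
    I - (e_i - e_j)(e_i - e_j)^T / 2."""
    n = len(neighbors)
    W = np.eye(n)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            e = np.zeros(n)
            e[i], e[j] = 1.0, -1.0
            W -= (1.0 / (n * len(nbrs))) * 0.5 * np.outer(e, e)
    return W

n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
W = expected_gossip_matrix(ring)
lam = np.sort(np.abs(np.linalg.eigvals(W)))[-2]   # second-largest eigenvalue modulus
print(lam)   # close to 1 for a ring, which is why convergence is slow on sparse graphs
```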
|
{
"cite_N": [
"@cite_19",
"@cite_2"
],
"mid": [
"2117905067",
"2137775453"
],
"abstract": [
"Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of \"gossip\" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.",
"When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance."
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
Motivated by the slow convergence of randomized gossip, geographic gossip was introduced in @cite_0 . Geographic gossip enables information exchange over multiple hops, assuming that nodes know their own geographic locations and those of their neighbors. It has been shown that this long-range information exchange improves the rate of convergence to @math for random geometric graphs. However, geographic gossip incurs overhead due to localization and geographic routing. Furthermore, the network needs to provide reliable two-way transmission over many hops; otherwise, messages lost in transit bias the average consensus computation.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2135241601"
],
"abstract": [
"Gossip algorithms for aggregation have recently received significant attention for sensor network applications because of their simplicity and robustness in noisy and uncertain environments. However, gossip algorithms can waste significant energy by essentially passing around redundant information multiple times. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is caused by slow mixing times of random walks on those graphs. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing a simple resampling method, we can demonstrate substantial gains over previously proposed gossip protocols. In particular, for random geometric graphs, our algorithm computes the true average to accuracy 1 n sup a using O(n sup 1.5 spl radic (logn)) radio transmissions, which reduces the energy consumption by a spl radic (n logn) factor over standard gossip algorithms."
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
The authors of @cite_16 @cite_17 have proposed broadcast gossip, a consensus algorithm that also makes use of the broadcast nature of wireless networks. At each iteration, a node is activated uniformly at random to broadcast its value. All nodes within transmission range of the broadcasting node compute a weighted average of their own value and the broadcast value, and they update their local value with this weighted average. Broadcast gossip does not preserve the network average at each iteration. It achieves a low variance (i.e., rapid convergence), but introduces bias: the value to which broadcast gossip converges can be significantly different from the true average.
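As a rough sketch of the update rule just described (not the cited authors' implementation), the following Python fragment lets a random node broadcast while all of its neighbors move toward the broadcast value; the mixing weight gamma and the toy ring topology are assumptions made purely for illustration.

import random

def broadcast_gossip_round(values, neighbors, gamma=0.5):
    # One broadcast-gossip iteration: a uniformly chosen node broadcasts its
    # value and every neighbor averages toward it with weight gamma. The
    # broadcaster keeps its own value, so the network sum (and hence the
    # average) is generally not preserved.
    i = random.randrange(len(values))
    for j in neighbors[i]:
        values[j] = gamma * values[j] + (1.0 - gamma) * values[i]

values = [3.0, 7.0, 1.0, 9.0, 5.0, 2.0]
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
true_average = sum(values) / len(values)
for _ in range(500):
    broadcast_gossip_round(values, neighbors)
# The nodes agree on a common value, but it is typically biased away from
# true_average.
print(true_average, values)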
|
{
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2148330908",
"2140637933"
],
"abstract": [
"Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and for computing in an arbitrarily connected network of nodes. Specifically, we propose a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithms almost surely converge to a consensus. In addition, the random consensus value is, in expectation, equal to the desired value, i.e., the average of initial node measurements. However, the broadcast gossip algorithms do not converge to the initial average in absolute sense because of the fact that the sum is not preserved at every iteration. We provide theoretical results on the mean square error performance of the broadcast gossip algorithms. The results indicate that the mean square error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithms required to achieve a given distance to consensus through numerical simulations.",
"Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we have recently proposed a broadcasting-based gossiping protocol to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. The class of broadcast gossip algorithms achieve consensus almost surely at a value that is in the neighborhood of the initial node measurements? average. In this paper, we further study the broadcast gossip algorithms: we derive and analyze the optimal mixing parameter of the algorithm when approached from worst-case convergence rate, present theoretical results on limiting mean square error performance of the algorithm, and find the convergence rate order of the proposed protocol."
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
Sundhar introduced a general class of incremental subgradient algorithms for distributed optimization in @cite_20 . That work investigates the effects of stochastic errors (e.g., due to quantization) on the convergence of consensus-like distributed optimization algorithms. Convergence of the algorithm is guaranteed under certain conditions on the errors, but the convergence rates are not characterized.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1973754217"
],
"abstract": [
"This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained."
]
}
|
0909.1830
|
2131023055
|
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
|
Nedić and Ozdaglar have also developed a distributed form of incremental subgradient optimization that generalizes the consensus framework @cite_13 . Our problem formulation is not as general as theirs, but with the specific formulation addressed in this paper we achieve stronger results. In particular, our cost function has a specific form and, by exploiting it, we are able to guarantee convergence to an optimal solution and obtain tight bounds on the rate of convergence as a function of the network topology.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2044212084"
],
"abstract": [
"We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy."
]
}
|
0909.1977
|
1705501928
|
We propose a methodology for the automatic verification of safety properties of controllers based on dynamical systems, such as those typically used in avionics. In particular, our focus is on proving stability properties of software implementing linear and some non-linear controllers. We develop an abstract interpretation framework that follows closely the Lyapunov methods used in proofs at the model level and describe the corresponding abstract domains, which for linear systems consist of ellipsoidal constraints. These ellipsoidal domains provide abstractions for the values of state variables and must be combined with other domains that model the remaining variables in a program. Thus, the problem of automatically assigning the right type of abstract domain to each variable arises. We provide an algorithm that solves this classification problem in many practical cases and suggest how it could be generalized to more complicated cases. We then find a fixpoint by solving a matrix equation, which in the linear case is just the discrete Lyapunov equation. Contrary to most cases in software analysis, this fixpoint cannot be reached by the usual iterative method of propagating constraints until saturation and so numerical methods become essential. Finally, we illustrate our methodology with several examples.
|
The study of the stability of controllers at the theoretical level is a very mature subject, especially for linear systems. Two techniques are available in that case: eigenvalue analysis and Lyapunov methods. While eigenvalue analysis is more straightforward, Lyapunov methods generalize better to non-linear cases @cite_5 . Thus, most recent work uses the latter. Lyapunov methods lead naturally to the study of the propagation of ellipsoids, such as that presented in this paper.
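To make the two techniques concrete for the discrete-time linear case x_{k+1} = A x_k, the sketch below checks stability both ways with NumPy/SciPy; the matrix A is an arbitrary example, not one taken from the paper.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Arbitrary example of a stable discrete-time linear system x_{k+1} = A x_k.
A = np.array([[0.9, 0.2],
              [-0.1, 0.7]])

# Eigenvalue analysis: the system is stable iff the spectral radius of A is < 1.
print("spectral radius:", max(abs(np.linalg.eigvals(A))))

# Lyapunov method: find P > 0 solving A^T P A - P = -Q for some Q > 0.
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)      # solves A^T P A - P + Q = 0
print("P is positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
# V(x) = x^T P x is then a quadratic Lyapunov function whose sublevel sets
# are ellipsoids -- the shape of the abstract elements used in this paper.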
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2023285936"
],
"abstract": [
"Introduction. Non-linear Differential Equations. Second-Order Systems. Approximate Analysis Methods. Lyapunov Stability. Input-Output Stability. Differential Geometric Methods. Appendices: Prevalence of Differential Equations with Unique Solutions, Proof of the Kalman-Yacubovitch Lemma and Proof of the Frobenius Theorem."
]
}
|
0909.1977
|
1705501928
|
We propose a methodology for the automatic verification of safety properties of controllers based on dynamical systems, such as those typically used in avionics. In particular, our focus is on proving stability properties of software implementing linear and some non-linear controllers. We develop an abstract interpretation framework that follows closely the Lyapunov methods used in proofs at the model level and describe the corresponding abstract domains, which for linear systems consist of ellipsoidal constraints. These ellipsoidal domains provide abstractions for the values of state variables and must be combined with other domains that model the remaining variables in a program. Thus, the problem of automatically assigning the right type of abstract domain to each variable arises. We provide an algorithm that solves this classification problem in many practical cases and suggest how it could be generalized to more complicated cases. We then find a fixpoint by solving a matrix equation, which in the linear case is just the discrete Lyapunov equation. Contrary to most cases in software analysis, this fixpoint cannot be reached by the usual iterative method of propagating constraints until saturation and so numerical methods become essential. Finally, we illustrate our methodology with several examples.
|
Beyond the theoretical level, several papers have focused on the analysis at the model level @cite_6 @cite_2 . Among them, the approach most similar to ours was proposed by @cite_15 , who used ellipsoidal calculus to compute overapproximations of the reachability set of controller models. They gave algorithms to estimate unions and intersections of ellipsoids and implemented a tool called VeriSHIFT to automate the reachability computation of models designed with it. Their algorithm was later improved by @cite_1 .
|
{
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_6",
"@cite_2"
],
"mid": [
"2162196264",
"2155596590",
"2060095702",
"1530240157"
],
"abstract": [
"A general verification algorithm is described. It is then shown how ellipsoidal methods developed by A. B. Kurzhanski and P. Varaiya can be adapted to the algorithm. New numerical algorithms that compute approximations of unions of ellipsoids and intersections of ellipsoids and polyhedra were developed. The presented techniques were implemented in the verification tool called VeriSHIFT and some practical results are discussed.",
"A new approach is presented for computing approximations of the reached sets of linear hybrid automata. First, we present some new theoretical results on termination of a class of reachability algorithms, which includes Botchkarev's, based on ellipsoidal calculus. The main contribution of the paper is a revised reachability computation that avoids the approximations caused by the union operation in the discretized flow tube estimation. Therefore, the new algorithm may classify as unreachable states that are reachable according to the previous algorithm because of the looser over-approximations introduced by the union operation. We implemented the new reachability algorithm and tested it successfully on a real-life case modeling a hybrid model of a controlled car engine.",
"",
"Predicate abstraction has emerged to be a powerful technique for extracting finite-state models from infinite-state systems, and has been recently shown to enhance the effectiveness of the reachability computation techniques for hybrid systems. Given a hybrid system with linear dynamics and a set of linear predicates, the verifier performs an on-the-fly search of the finite discrete quotient whose states correspond to the truth assignments to the input predicates. To compute the transitions out of an abstract state, the tool needs to compute the set of discrete and continuous successors, and find out all the abstract states that this set intersects with. The complexity of this computation grows exponentially with the number of abstraction predicates. In this paper we present various optimizations that are aimed at speeding up the search in the abstract state-space, and demonstrate their benefits via case studies. We also discuss the completeness of the predicate abstraction technique for proving safety of hybrid systems."
]
}
|
0909.1977
|
1705501928
|
We propose a methodology for the automatic verification of safety properties of controllers based on dynamical systems, such as those typically used in avionics. In particular, our focus is on proving stability properties of software implementing linear and some non-linear controllers. We develop an abstract interpretation framework that follows closely the Lyapunov methods used in proofs at the model level and describe the corresponding abstract domains, which for linear systems consist of ellipsoidal constraints. These ellipsoidal domains provide abstractions for the values of state variables and must be combined with other domains that model the remaining variables in a program. Thus, the problem of automatically assigning the right type of abstract domain to each variable arises. We provide an algorithm that solves this classification problem in many practical cases and suggest how it could be generalized to more complicated cases. We then find a fixpoint by solving a matrix equation, which in the linear case is just the discrete Lyapunov equation. Contrary to most cases in software analysis, this fixpoint cannot be reached by the usual iterative method of propagating constraints until saturation and so numerical methods become essential. Finally, we illustrate our methodology with several examples.
|
At the implementation level, Cousot, in a follow-up to some earlier work @cite_9 , showed how to use parametric abstract domains and external constraint solvers to prove invariance and termination @cite_23 , illustrating the approach with several small examples. Simultaneously, @cite_22 used similar linear and semidefinite programming to search for Lyapunov invariants for boundedness and termination. These methods were later extended to also prove non-termination @cite_19 . Termination has also been studied by Tiwari @cite_18 , who used eigenvalue techniques instead of Lyapunov methods to prove termination of linear programs very similar to those common in linear controllers. Automatic linear invariant generation by means of non-linear constraint solvers has also been studied by @cite_17 , and their work has been generalized to some non-linear invariants by @cite_0 .
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_17"
],
"mid": [
"1575647584",
"1863722042",
"2132661148",
"2098045685",
"",
"2136333450",
"1563374593"
],
"abstract": [
"We show that termination of a class of linear loop programs is decidable. Linear loop programs are discrete-time linear systems with a loop condition governing termination, that is, a while loop with linear assignments. We relate the termination of such a simple loop, on all initial values, to the eigenvectors corresponding to only the positive real eigenvalues of the matrix defining the loop assignments. This characterization of termination is reminiscent of the famous stability theorems in control theory that characterize stability in terms of eigenvalues.",
"Modeling and analysis techniques are presented for real-time, safety-critical software. Software analysis is the task of verifying whether the computer code will execute safely, free of run-time errors. The critical properties that prove safe execution include bounded-ness of variables and termination of the program in finite time. In this paper, dynamical system representations of computer programs along with specific models that are pertinent to analysis via an optimization-based search for system invariants are developed. It is shown that the automatic search for system invariants that establish the desired properties of computer code, can be formulated as a convex optimization problem, such as linear programming, semidefinite programming, and or sum of squares programming.",
"",
"We present a new technique for the generation of non-linear (algebraic) invariants of a program. Our technique uses the theory of ideals over polynomial rings to reduce the non-linear invariant generation problem to a numerical constraint solving problem. So far, the literature on invariant generation has been focussed on the construction of linear invariants for linear programs. Consequently, there has been little progress toward non-linear invariant generation. In this paper, we demonstrate a technique that encodes the conditions for a given template assertion being an invariant into a set of constraints, such that all the solutions to these constraints correspond to non-linear (algebraic) loop invariants of the program. We discuss some trade-offs between the completeness of the technique and the tractability of the constraint-solving problem generated. The application of the technique is demonstrated on a few examples.",
"",
"In order to verify semialgebraic programs, we automatize the Floyd Naur Hoare proof method. The main task is to automatically infer valid invariants and rank functions. First we express the program semantics in polynomial form. Then the unknown rank function and invariants are abstracted in parametric form. The implication in the Floyd Naur Hoare verification conditions is handled by abstraction into numerical constraints by Lagrangian relaxation. The remaining universal quantification is handled by semidefinite programming relaxation. Finally the parameters are computed using semidefinite programming solvers. This new approach exploits the recent progress in the numerical resolution of linear or bilinear matrix inequalities by semidefinite programming using efficient polynomial primal dual interior point methods generalizing those well-known in linear programming to convex optimization. The framework is applied to invariance and termination proof of sequential, nondeterministic, concurrent, and fair parallel imperative polynomial programs and can easily be extended to other safety and liveness properties.",
"We present a new method for the generation of linear invariants which reduces the problem to a non-linear constraint solving problem. Our method, based on Farkas’ Lemma, synthesizes linear invariants by extracting non-linear constraints on the coefficients of a target invariant from a program. These constraints guarantee that the linear invariant is inductive. We then apply existing techniques, including specialized quantifier elimination methods over the reals, to solve these non-linear constraints. Our method has the advantage of being complete for inductive invariants. To our knowledge, this is the first sound and complete technique for generating inductive invariants of this form. We illustrate the practicality of our method on several examples, including cases in which traditional methods based on abstract interpretation with widening fail to generate sufficiently strong invariants."
]
}
|
0909.1977
|
1705501928
|
We propose a methodology for the automatic verification of safety properties of controllers based on dynamical systems, such as those typically used in avionics. In particular, our focus is on proving stability properties of software implementing linear and some non-linear controllers. We develop an abstract interpretation framework that follows closely the Lyapunov methods used in proofs at the model level and describe the corresponding abstract domains, which for linear systems consist of ellipsoidal constraints. These ellipsoidal domains provide abstractions for the values of state variables and must be combined with other domains that model the remaining variables in a program. Thus, the problem of automatically assigning the right type of abstract domain to each variable arises. We provide an algorithm that solves this classification problem in many practical cases and suggest how it could be generalized to more complicated cases. We then find a fixpoint by solving a matrix equation, which in the linear case is just the discrete Lyapunov equation. Contrary to most cases in software analysis, this fixpoint cannot be reached by the usual iterative method of propagating constraints until saturation and so numerical methods become essential. Finally, we illustrate our methodology with several examples.
|
A different route was followed by @cite_4 with the design and implementation of the heavy-duty ASTREE static analyzer based on abstract interpretation @cite_21 . The novelty of the ASTREE analyzer is the combination of multiple abstract domains @cite_13 , such as rectangular, polyhedral, octagonal and an ad-hoc 2-dimensional version of an ellipsoidal domain, and its application to large programs.
|
{
"cite_N": [
"@cite_13",
"@cite_21",
"@cite_4"
],
"mid": [
"2113159073",
"2043100293",
"2170736936"
],
"abstract": [
"We describe the structure of the abstract domains in the ASTREE static analyzer, their modular organization into a hierarchical network, their cooperation to over-approximate the conjunction reduced product of different abstractions and to ensure termination using collaborative widenings and narrowings. This separation of the abstraction into a combination of cooperative abstract domains makes ASTREE extensible, an essential feature to cope with false alarms and ultimately provide sound formal verification of the absence of runtime errors in very large software.",
"A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).",
"We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software.The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization (Sect. 3 and 7), the symbolic manipulation of expressions to improve the precision of abstract transfer functions (Sect. 6.3), the octagon (Sect. 6.2.2), ellipsoid (Sect. 6.2.3), and decision tree (Sect. 6.2.4) abstract domains, all with sound handling of rounding errors in oating point computations, widening strategies (with thresholds: Sect. 7.1.2, delayed: Sect. 7.1.3) and the automatic determination of the parameters (parametrized packing: Sect. 7.2)."
]
}
|
0909.2290
|
2949363019
|
Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
|
Two popular anonymization techniques are generalization and bucketization. Generalization @cite_4 @cite_14 @cite_13 replaces a value with a less-specific but semantically consistent value. Three types of encoding schemes have been proposed for generalization: global recoding, regional recoding, and local recoding. Global recoding has the property that multiple occurrences of the same value are always replaced by the same generalized value. Regional recoding @cite_1 , also called multi-dimensional recoding (as in the Mondrian algorithm), partitions the domain space into non-intersecting regions, and data points in the same region are represented by the region they fall in. Local recoding does not have the above constraints and allows different occurrences of the same value to be generalized differently.
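A minimal sketch of global recoding on a toy table is given below; the value hierarchies (decades for Age, three-digit prefixes for Zip) and the records themselves are made-up illustrations, not data from the cited works.

# Minimal illustration of global recoding: every occurrence of a value is
# mapped to the same generalized value, using made-up hierarchies on a toy table.
table = [
    {"Age": 23, "Zip": "47677", "Disease": "Flu"},
    {"Age": 27, "Zip": "47602", "Disease": "Cancer"},
    {"Age": 35, "Zip": "47678", "Disease": "Flu"},
    {"Age": 59, "Zip": "47905", "Disease": "Gastritis"},
]

def generalize(record):
    out = dict(record)
    low = (record["Age"] // 10) * 10
    out["Age"] = f"[{low}-{low + 9}]"        # generalize age to a decade
    out["Zip"] = record["Zip"][:3] + "**"    # keep only the zip-code prefix
    return out

for record in table:
    print(generalize(record))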
|
{
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_14",
"@cite_4"
],
"mid": [
"2119047901",
"2135581534",
"2159024459",
""
],
"abstract": [
"Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection.",
"K-Anonymity has been proposed as a mechanism for protecting privacy in microdata publishing, and numerous recoding \"models\" have been considered for achieving 𝑘anonymity. This paper proposes a new multidimensional model, which provides an additional degree of flexibility not seen in previous (single-dimensional) approaches. Often this flexibility leads to higher-quality anonymizations, as measured both by general-purpose metrics and more specific notions of query answerability. Optimal multidimensional anonymization is NP-hard (like previous optimal 𝑘-anonymity problems). However, we introduce a simple greedy approximation algorithm, and experimental results show that this greedy algorithm frequently leads to more desirable anonymizations than exhaustive optimal algorithms for two single-dimensional models.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
""
]
}
|
0909.2290
|
2949363019
|
Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
|
Bucketization @cite_8 @cite_15 @cite_11 first partitions the tuples in the table into buckets and then separates the quasi-identifiers from the sensitive attribute by randomly permuting the sensitive attribute values in each bucket. The anonymized data consist of a set of buckets with permuted sensitive attribute values. In particular, bucketization has been used for anonymizing high-dimensional data @cite_34 . Detailed comparisons of slicing with generalization and with bucketization are given later in the paper.
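The core bucketization step can be illustrated as follows; this is a simplified sketch on assumed toy data with a fixed bucket size, not a faithful reimplementation of any cited algorithm.

import random

# Minimal bucketization sketch: tuples are grouped into buckets and the
# sensitive values are randomly permuted within each bucket, severing the
# exact link between quasi-identifiers and the sensitive attribute.
table = [
    (23, "47677", "Flu"), (27, "47602", "Cancer"),
    (35, "47678", "Flu"), (59, "47905", "Gastritis"),
]
bucket_size = 2
buckets = [table[i:i + bucket_size] for i in range(0, len(table), bucket_size)]

published = []
for bucket in buckets:
    sensitive = [row[-1] for row in bucket]
    random.shuffle(sensitive)                        # permute within the bucket
    published.extend(row[:-1] + (s,) for row, s in zip(bucket, sensitive))
print(published)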
|
{
"cite_N": [
"@cite_15",
"@cite_11",
"@cite_34",
"@cite_8"
],
"mid": [
"2099907571",
"2166658587",
"2084012970",
""
],
"abstract": [
"Recent work has shown the necessity of considering an attacker's background knowledge when reasoning about privacy in data publishing. However, in practice, the data publisher does not know what background knowledge the attacker possesses. Thus, it is important to consider the worst-case. In this paper, we initiate a formal study of worst-case background knowledge. We propose a language that can express any background knowledge about the data. We provide a polynomial time algorithm to measure the amount of disclosure of sensitive information in the worst case, given that the attacker has at most k pieces of information in this language. We also provide a method to efficiently sanitize the data so that the amount of disclosure in the worst case is less than a specified threshold.",
"Privacy is a serious concern when microdata need to be released for ad hoc analyses. The privacy goals of existing privacy protection approaches (e.g., k-anonymity and l-diversity) are suitable only for categorical sensitive attributes. Since applying them directly to numerical sensitive attributes (e.g., salary) may result in undesirable information leakage, we propose privacy goals to better capture the need of privacy protection for numerical sensitive attributes. Complementing the desire for privacy is the need to support ad hoc aggregate analyses over microdata. Existing generalization-based anonymization approaches cannot answer aggregate queries with reasonable accuracy. We present a general framework of permutation-based anonymization to support accurate answering of aggregate queries and show that, for the same grouping, permutation-based techniques can always answer aggregate queries more accurately than generalization-based approaches. We further propose several criteria to optimize permutations for accurate answering of aggregate queries, and develop efficient algorithms for each criterion.",
"Existing research on privacy-preserving data publishing focuses on relational data: in this context, the objective is to enforce privacy-preserving paradigms, such as k- anonymity and lscr-diversity, while minimizing the information loss incurred in the anonymizing process (i.e. maximize data utility). However, existing techniques adopt an indexing- or clustering- based approach, and work well for fixed-schema data, with low dimensionality. Nevertheless, certain applications require privacy-preserving publishing of transaction data (or basket data), which involves hundreds or even thousands of dimensions, rendering existing methods unusable. We propose a novel anonymization method for sparse high-dimensional data. We employ a particular representation that captures the correlation in the underlying data, and facilitates the formation of anonymized groups with low information loss. We propose an efficient anonymization algorithm based on this representation. We show experimentally, using real-life datasets, that our method clearly outperforms existing state-of-the-art in terms of both data utility and computational overhead.",
""
]
}
|
0909.2290
|
2949363019
|
Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
|
Slicing has some connections to marginal publication @cite_32 ; both release correlations among a subset of attributes. Slicing differs from marginal publication in a number of aspects. First, marginal publication can be viewed as a special case of slicing without horizontal partitioning; correlations among attributes in different columns are therefore lost in marginal publication. By using horizontal partitioning, slicing preserves attribute correlations between different columns (at the bucket level). Marginal publication is similar to overlapping vertical partitioning, which we leave as future work. Second, the key idea of slicing is to preserve correlations between highly-correlated attributes and to break correlations between uncorrelated attributes, thus achieving both better utility and better privacy. Third, existing data analysis methods (e.g., query answering) can be easily used on the sliced data.
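A highly simplified sketch of the slicing step itself (vertical column grouping, horizontal bucketing, and within-bucket permutation of column values) is shown below; the column grouping and the toy tuples are assumptions made purely for illustration and do not reproduce the paper's algorithm.

import random

# Minimal slicing sketch: attributes are grouped into columns (vertical
# partitioning), tuples into buckets (horizontal partitioning), and the
# column values are permuted independently within each bucket.
rows = [
    (23, "M", "47677", "Flu"), (27, "F", "47602", "Cancer"),
    (35, "M", "47678", "Flu"), (59, "F", "47905", "Gastritis"),
]
columns = [(0, 1), (2, 3)]            # e.g. {Age, Sex} and {Zip, Disease}
buckets = [rows[:2], rows[2:]]        # horizontal partition into two buckets

sliced = []
for bucket in buckets:
    pieces = []
    for column in columns:
        vals = [tuple(row[i] for i in column) for row in bucket]
        random.shuffle(vals)          # break the linkage across columns
        pieces.append(vals)
    # Re-assemble one published tuple per row position in the bucket.
    sliced.extend(tuple(v for part in combo for v in part) for combo in zip(*pieces))
print(sliced)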
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2071776923"
],
"abstract": [
"Limiting disclosure in data publishing requires a careful balance between privacy and utility. Information about individuals must not be revealed, but a dataset should still be useful for studying the characteristics of a population. Privacy requirements such as k-anonymity and l-diversity are designed to thwart attacks that attempt to identify individuals in the data and to discover their sensitive information. On the other hand, the utility of such data has not been well-studied.In this paper we will discuss the shortcomings of current heuristic approaches to measuring utility and we will introduce a formal approach to measuring utility. Armed with this utility metric, we will show how to inject additional information into k-anonymous and l-diverse tables. This information has an intuitive semantic meaning, it increases the utility beyond what is possible in the original k-anonymity and l-diversity frameworks, and it maintains the privacy guarantees of k-anonymity and l-diversity."
]
}
|
0909.2290
|
2949363019
|
Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
|
Existing privacy measures for membership disclosure protection include differential privacy @cite_28 @cite_26 @cite_23 and @math -presence @cite_6 . Differential privacy has recently received much attention in data privacy, especially for interactive databases @cite_28 @cite_36 @cite_26 @cite_23 @cite_17 . @cite_20 design the @math algorithm for data perturbation that satisfies differential privacy. @cite_21 apply the notion of differential privacy for synthetic data generation. On the other hand, @math -presence @cite_6 assumes that the published database is a sample of a large public database and the adversary has knowledge of this large database. The calculation of disclosure risk depends on this large database.
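For reference, the standard Laplace mechanism for a counting query, which underlies much of the interactive work cited above, can be written in a few lines; this is the textbook mechanism rather than a reimplementation of any of the cited algorithms, and the data and epsilon value are arbitrary.

import numpy as np

def dp_count(rows, predicate, epsilon):
    # Standard Laplace mechanism for a counting query: the query has
    # sensitivity 1 (adding or removing one row changes the count by at most
    # 1), so Laplace noise with scale 1/epsilon yields epsilon-differential
    # privacy.
    true_count = sum(1 for row in rows if predicate(row))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 27, 35, 59, 61, 42]                    # toy data
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))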
|
{
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_36",
"@cite_21",
"@cite_6",
"@cite_23",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2110868467",
"2010523825",
"2080044359",
"2139864694",
"",
"2115209166",
"2097583584"
],
"abstract": [
"",
"We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d_1, ..., d_n, with a query being a subset q ⊆ [n] to be answered by Σ_{i∈q} d_i. Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases, we show that in order to achieve privacy one has to add perturbation of magnitude Ω(√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n). For time-T bounded adversaries we demonstrate a privacy-preserving access algorithm whose perturbation magnitude is ≈ √T.",
"We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is Σ_{i∈S} f(d_i), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large. We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11].",
"In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.",
"Advances in information technology, and its use in research, are increasing both the need for anonymized data and the risks of poor anonymization. We present a metric, δ-presence, that clearly links the quality of anonymization to the risk posed by inadequate anonymization. We show that existing anonymization techniques are inappropriate for situations where δ-presence is a good metric (specifically, where knowing an individual is in the database poses a privacy risk), and present algorithms for effectively anonymizing to meet δ-presence. The algorithms are evaluated in the context of a real-world scenario, demonstrating practical applicability of the approach.",
"",
"We consider the privacy problem in data publishing: given a database instance containing sensitive information \"anonymize\" it to obtain a view such that, on one hand attackers cannot learn any sensitive information from the view, and on the other hand legitimate users can use it to compute useful statistics. These are conflicting goals. In this paper we prove an almost crisp separation of the case when a useful anonymization algorithm is possible from when it is not, based on the attacker's prior knowledge. Our definition of privacy is derived from existing literature and relates the attacker's prior belief for a given tuple t, with the posterior belief for the same tuple. Our definition of utility is based on the error bound on the estimates of counting queries. The main result has two parts. First we show that if the prior beliefs for some tuples are large then there exists no useful anonymization algorithm. Second, we show that when the prior is bounded for all tuples then there exists an anonymization algorithm that is both private and useful. The anonymization algorithm that forms our positive result is novel, and improves the privacy utility tradeoff of previously known algorithms with privacy utility guarantees such as FRAPP.",
"Given a dataset containing sensitive personal information, a statistical database answers aggregate queries in a manner that preserves individual privacy. We consider the problem of constructing a statistical database using output perturbation, which protects privacy by injecting a small noise into each query result. We show that the state-of-the-art approach, e-differential privacy, suffers from two severe deficiencies: it (i) incurs prohibitive computation overhead, and (ii) can answer only a limited number of queries, after which the statistical database has to be shut down. To remedy the problem, we develop a new technique that enforces e-different privacy with economical cost. Our technique also incorporates a query relaxation mechanism, which removes the restriction on the number of permissible queries. The effectiveness and efficiency of our solution are verified through experiments with real data."
]
}
|
0909.2290
|
2949363019
|
Several anonymization techniques, such as generalization and bucketization, have been designed for privacy preserving microdata publishing. Recent work has shown that generalization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the l-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
|
Finally, privacy measures for attribute disclosure protection include @math -diversity @cite_35 , @math -anonymity @cite_0 , @math -closeness @cite_7 , @math -anonymity @cite_11 , @math -safety @cite_15 , privacy skyline @cite_33 , @math -confidentiality @cite_24 and @math -anonymity @cite_22 . We use @math -diversity in slicing for attribute disclosure protection.
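To make the l-diversity requirement mentioned above concrete, here is a minimal Python sketch, not the slicing algorithm of the paper itself: it checks the simplest distinct variant and the entropy variant of l-diversity on buckets of sensitive values. The bucket contents and the function names are illustrative assumptions.

from collections import Counter
import math

def is_distinct_l_diverse(buckets, l):
    """Simplest (distinct) l-diversity: every bucket of sensitive values
    must contain at least l distinct values."""
    return all(len(set(bucket)) >= l for bucket in buckets)

def is_entropy_l_diverse(buckets, l):
    """Entropy l-diversity: the entropy of the sensitive-value distribution
    in every bucket must be at least log(l)."""
    for bucket in buckets:
        counts = Counter(bucket)
        n = len(bucket)
        entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
        if entropy < math.log(l):
            return False
    return True

# Hypothetical buckets of sensitive values produced by some partitioning.
buckets = [["flu", "cancer", "flu"], ["hiv", "flu", "cancer"]]
print(is_distinct_l_diverse(buckets, 2))  # True: each bucket has >= 2 distinct values
print(is_entropy_l_diverse(buckets, 2))   # False: the first bucket is too skewed

In the paper's setting such a check would be applied to the sensitive column within each bucket produced by slicing; the plain lists above merely stand in for that structure.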
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_22",
"@cite_0",
"@cite_24",
"@cite_15",
"@cite_11"
],
"mid": [
"2136114025",
"2116416325",
"",
"2163882872",
"2009331946",
"2140096141",
"2099907571",
"2166658587"
],
"abstract": [
"The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain \"identifying\" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.",
"Privacy is an important issue in data publishing. Many organizations distribute non-aggregate personal data for research, and they must take steps to ensure that an adversary cannot predict sensitive information pertaining to individuals with high confidence. This problem is further complicated by the fact that, in addition to the published data, the adversary may also have access to other resources (e.g., public records and social networks relating individuals), which we call external knowledge. A robust privacy criterion should take this external knowledge into consideration. In this paper, we first describe a general framework for reasoning about privacy in the presence of external knowledge. Within this framework, we propose a novel multidimensional approach to quantifying an adversary's external knowledge. This approach allows the publishing organization to investigate privacy threats and enforce privacy requirements in the presence of various types and amounts of external knowledge. Our main technical contributions include a multidimensional privacy criterion that is more intuitive and flexible than previous approaches to modeling background knowledge. In addition, we provide algorithms for measuring disclosure and sanitizing data that improve computational efficiency several orders of magnitude over the best known techniques.",
"",
"We identify proximity breach as a privacy threat specific to numerical sensitive attributes in anonymized data publication. Such breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual must fall in a short interval --- even though the adversary may have low confidence about the victim's actual value. None of the existing anonymization principles (e.g., k-anonymity, l-diversity, etc.) can effectively prevent proximity breach. We remedy the problem by introducing a novel principle called (e, m)-anonymity. Intuitively, the principle demands that, given a QI-group G, for every sensitive value x in G, at most 1 m of the tuples in G can have sensitive values \"similar\" to x, where the similarity is controlled by e. We provide a careful analytical study of the theoretical characteristics of (e, m)-anonymity, and the corresponding generalization algorithm. Our findings are verified by experiments with real data.",
"Privacy preservation is an important issue in the release of data for mining purposes. The k-anonymity model has been introduced for protecting individual identification. Recent studies show that a more sophisticated model is necessary to protect the association of individuals to sensitive information. In this paper, we propose an (α, k)-anonymity model to protect both identifications and relationships to sensitive information in data. We discuss the properties of (α, k)-anonymity model. We prove that the optimal (α, k)-anonymity problem is NP-hard. We first presentan optimal global-recoding method for the (α, k)-anonymity problem. Next we propose a local-recoding algorithm which is more scalable and result in less data distortion. The effectiveness and efficiency are shown by experiments. We also describe how the model can be extended to more general case.",
"Data publishing generates much concern over the protection of individual privacy. Recent studies consider cases where the adversary may possess different kinds of knowledge about the data. In this paper, we show that knowledge of the mechanism or algorithm of anonymization for data publication can also lead to extra information that assists the adversary and jeopardizes individual privacy. In particular, all known mechanisms try to minimize information loss and such an attempt provides a loophole for attacks. We call such an attack a minimality attack. In this paper, we introduce a model called m-confidentiality which deals with minimality attacks, and propose a feasible solution. Our experiments show that minimality attacks are practical concerns on real datasets and that our algorithm can prevent such attacks with very little overhead and information loss.",
"Recent work has shown the necessity of considering an attacker's background knowledge when reasoning about privacy in data publishing. However, in practice, the data publisher does not know what background knowledge the attacker possesses. Thus, it is important to consider the worst-case. In this paper, we initiate a formal study of worst-case background knowledge. We propose a language that can express any background knowledge about the data. We provide a polynomial time algorithm to measure the amount of disclosure of sensitive information in the worst case, given that the attacker has at most k pieces of information in this language. We also provide a method to efficiently sanitize the data so that the amount of disclosure in the worst case is less than a specified threshold.",
"Privacy is a serious concern when microdata need to be released for ad hoc analyses. The privacy goals of existing privacy protection approaches (e.g., k-anonymity and l-diversity) are suitable only for categorical sensitive attributes. Since applying them directly to numerical sensitive attributes (e.g., salary) may result in undesirable information leakage, we propose privacy goals to better capture the need of privacy protection for numerical sensitive attributes. Complementing the desire for privacy is the need to support ad hoc aggregate analyses over microdata. Existing generalization-based anonymization approaches cannot answer aggregate queries with reasonable accuracy. We present a general framework of permutation-based anonymization to support accurate answering of aggregate queries and show that, for the same grouping, permutation-based techniques can always answer aggregate queries more accurately than generalization-based approaches. We further propose several criteria to optimize permutations for accurate answering of aggregate queries, and develop efficient algorithms for each criterion."
]
}
|
0909.0892
|
2949585032
|
We consider auctions in which greedy algorithms, paired with first-price or critical-price payment rules, are used to resolve multi-parameter combinatorial allocation problems. We study the price of anarchy for social welfare in such auctions. We show that, for a variety of equilibrium concepts, including Bayes-Nash equilibrium and correlated equilibrium, the resulting price of anarchy bound is close to the approximation factor of the underlying greedy algorithm.
|
The BNE solution concept was recently applied to submodular combinatorial auctions @cite_12 , where it was shown that a randomized mechanism can attain a @math -approximation at any mixed equilibrium assuming that bidders are ex-post individually rational. Pure equilibria of first-price mechanisms have also been studied for path procurement auctions @cite_1 . Performance at Nash equilibrium has been extensively studied in the economics literature (see Jackson @cite_13 for a survey) and recently in work on Internet advertising slot auctions @cite_8 @cite_23 . In that line of research the goal is often revenue maximization, rather than social welfare maximization, and traditionally one wishes to implement a particular optimal allocation rule, rather than guarantee a given approximation ratio.
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_23",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2127595535",
"1975392791",
"1977483205",
"1489333206"
],
"abstract": [
"",
"We study first-price auction mechanisms for auctioning flow between given nodes in a graph. A first-price auction is any auction in which links on winning paths are paid their bid amount; the designer has flexibility in specifying remaining details. We assume edges are independent agents with fixed capacities and costs, and their objective is to maximize their profit. We characterize all strong e-Nash equilibria of a first-price auction, and show that the total payment is never significantly more than, and often less than, the well known dominant strategy Vickrey-Clark-Groves mechanism. We then present a randomized version of the first-price auction for which the equilibrium condition can be relaxed to e-Nash equilibrium. We next consider a model in which the amount of demand is uncertain, but its probability distribution is known. For this model, we show that a simple ex ante first-price auction may not have any e-Nash equilibria. We then present a modified mechanism with 2-parameter bids which does have an e-Nash equilibrium. For a randomized version of this 2-parameter mechanism we characterize the set of all eNEs and prove a bound on the total payment in any eNE.",
"We investigate the \"generalized second price\" auction (GSP), a new mechanism which is used by search engines to sell online advertising that most Internet users encounter daily. GSP is tailored to its unique environment, and neither the mechanism nor the environment have previously been studied in the mechanism design literature. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. In particular, unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP in a dynamic environment, we describe the generalized English auction that corresponds to the GSP and show that it has a unique equilibrium. This is an ex post equilibrium that results in the same payoffs to all players as the dominant strategy equilibrium of VCG.",
"This paper is meant to familiarize the audience with some of the fundamental results in the theory of implementation and provide a quick progression to some open questions in the literature.",
"We study the following Bayesian setting: mitems are sold to nselfish bidders in mindependent second-price auctions. Each bidder has a privatevaluation function that expresses complex preferences over allsubsets of items. Bidders only have beliefsabout the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular valuation functions, then every Bayesian Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game a pure Nash always exists and can be found in time that is polynomial in both mand n."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
In this section, we give a detailed comparison of our work with related works. Section 6.1 compares related works on various kinds of complex regions. The tree model of @cite_13 is closely related to our link graph model. Section 6.2 then compares our model with the tree model. We show in Section 6.3 how our graph model can be used to produce representations of bounded regions at multiple levels of detail.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"1603434919"
],
"abstract": [
"This study aims to model an appropriate set of 2-dimensional spatial objects (i.e. areas) embedded in R2 with the usual metric and topology. The set of objects to be modelled is an extension of the set of 2-dimensional objects which can be represented within the vector-based data model. The model aims to capture explicitly some important topological properties of the spatial objects, e.g. connectedness and region inclusion. The construction discussed in this paper is capable of representing a large class of areal objects, including objects with holes which have islands (to any finite level). It has the virtue of being canonical, in the sense that any appropriate areal object has a unique representation in this model. The paper describes the model by specifying the areal objects under consideration and providing their representation. It also defines a set of operations and discusses algorithms for their implementation."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
The notion of simple region with holes and its generalized region are first introduced in @cite_3 . Our definition of simple region with holes is slightly different from that given in @cite_3 , but is consistent with the one in @cite_15 .
|
{
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"1997862802",
"5053679"
],
"abstract": [
"For a long time topological relationships between spatial objects have been a focus of research in a number of disciplines like artificial intelligence, cognitive science, linguistics, robotics, and spatial reasoning. Especially as predicates they support the design of suitable query languages for spatial data retrieval and analysis in spatial databases and geographical information systems (GIS). Unfortunately, they have so far only been defined for and applicable to simplified abstractions of spatial objects like single points, continuous lines, and simple regions. With the introduction of complex spatial data types an issue arises regarding the design, definition, and number of topological relationships operating on these complex types. This article closes this gap and first introduces definitions of general and versatile spatial data types for complex points, complex lines, and complex regions. Based on the well known 9-intersection model, it then determines the complete sets of mutually exclusive topological relationships for all type combinations. Completeness and mutual exclusion are shown by a proof technique called proof-by-constraint-and-drawing. Due to the resulting large numbers of predicates and the difficulty of handling them, the user is provided with the concepts of topological cluster predicates and topological predicate groups, which permit one to reduce the number of predicates to be dealt with in a user-defined and or application-specific manner.",
"The 4-intersection, a model for the representation of topological relations between 2-dimensional objects with connected boundaries and connected interiors, is extended to cover topological relations between 2-dimensional objects with arbitrary holes, called regions with holes. Each region with holes is represented by its generalized region—the union of the object and its holes — and the closure of each hole. The topological relation between two regions with holes, A and B, is described by the set of all individual topological relations between (1) A ’s generalized region and B’s generalized region, (2) A ’s generalized region and each of B’s holes, (3) B’s generalized region with each of A ’s holes, and (4) each of A ’s holes with each of B’s holes. As a side product, the same formalism applies to the description of topological relations between 1-spheres. An algorithm is developed that minimizes the number of individual topological relations necessary to describe a configuration completely. This model of representing complex topological relations is suitable for a multi-level treatment of topological relations, at the least detailed level of which the relation between the generalized regions prevails. It is shown how this model applies to the assessment of consistency in multiple representations when, at a coarser level of less detail, regions are generalized by dropping holes."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
@cite_3 actually proposed two definitions of simple regions with holes, both requiring that the region have a connected interior. The first definition further requires that the boundaries of different exterior components be disjoint. While this constraint is relaxed in the second definition, it allows spikes in the exterior of such a region. Figure shows such an example. The region @math on the left of this figure has a spike (i.e. the common boundary of @math and @math ) in the exterior of @math , but its interior @math (shown on the right of this figure) is a single piece.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"5053679"
],
"abstract": [
"The 4-intersection, a model for the representation of topological relations between 2-dimensional objects with connected boundaries and connected interiors, is extended to cover topological relations between 2-dimensional objects with arbitrary holes, called regions with holes. Each region with holes is represented by its generalized region—the union of the object and its holes — and the closure of each hole. The topological relation between two regions with holes, A and B, is described by the set of all individual topological relations between (1) A ’s generalized region and B’s generalized region, (2) A ’s generalized region and each of B’s holes, (3) B’s generalized region with each of A ’s holes, and (4) each of A ’s holes with each of B’s holes. As a side product, the same formalism applies to the description of topological relations between 1-spheres. An algorithm is developed that minimizes the number of individual topological relations necessary to describe a configuration completely. This model of representing complex topological relations is suitable for a multi-level treatment of topological relations, at the least detailed level of which the relation between the generalized regions prevails. It is shown how this model applies to the assessment of consistency in multiple representations when, at a coarser level of less detail, regions are generalized by dropping holes."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
The notion of composite region is first introduced in @cite_6 , where two components are allowed to meet at more than one point. This implies that the region in Figure (c) is considered a composite region. This is the only difference between our notion and that of @cite_6 .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2067629381"
],
"abstract": [
"Spatial data are at the core of many scientific information systems. The design of suitable query languages for spatial data retrieval and analysis is still an issue on the cutting edge of research. The primary requirement of these languages is to support spatial operators. Unfortunately, current systems support only simplified abstractions of geographic objects based on simple regions which are usually not sufficient to deal with the complexity of the geographic reality. Composite regions, which are regions made up of several components, are necessary to overcome those limits. The paper introduces a two-level formal model suitable for representing topological relationships among composite regions. The contribution gives the needed formal background for adding composite regions inside a spatial query language with the purpose of answering topological queries on complex geographic objects."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
@cite_15 defined a complex region as a bounded region @math which has finitely many components (called faces there) and required that the intersection of any two components is a finite set. They considered a complex region as a collection of faces. For example, the bounded region shown in Figure is interpreted as having seven faces: one has a hole, the other six are simple regions. How these faces are composed is not mentioned explicitly in their definition. As another example, the two complex regions in Figure have the same set of faces, but different link graphs and internal structures. When restricted to semi-algebraic sets, all bounded regions considered in this paper are complex regions in the sense of @cite_15 .
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"1997862802"
],
"abstract": [
"For a long time topological relationships between spatial objects have been a focus of research in a number of disciplines like artificial intelligence, cognitive science, linguistics, robotics, and spatial reasoning. Especially as predicates they support the design of suitable query languages for spatial data retrieval and analysis in spatial databases and geographical information systems (GIS). Unfortunately, they have so far only been defined for and applicable to simplified abstractions of spatial objects like single points, continuous lines, and simple regions. With the introduction of complex spatial data types an issue arises regarding the design, definition, and number of topological relationships operating on these complex types. This article closes this gap and first introduces definitions of general and versatile spatial data types for complex points, complex lines, and complex regions. Based on the well known 9-intersection model, it then determines the complete sets of mutually exclusive topological relationships for all type combinations. Completeness and mutual exclusion are shown by a proof technique called proof-by-constraint-and-drawing. Due to the resulting large numbers of predicates and the difficulty of handling them, the user is provided with the concepts of topological cluster predicates and topological predicate groups, which permit one to reduce the number of predicates to be dealt with in a user-defined and or application-specific manner."
]
}
|
0909.0109
|
2950098792
|
Topological information is the most important kind of qualitative spatial information. Current formalisms for the topological aspect of space focus on relations between regions or properties of regions. This work provides a qualitative model for representing the topological internal structure of complex regions, which could consist of multiple pieces and/or have holes and islands to any finite level. We propose a layered graph model for representing the internal structure of complex plane regions, where each node represents the closure of a connected component of the interior or the exterior of a complex region. The model provides a complete representation in the sense that the (global) nine-intersections between the interiors, the boundaries, and the exteriors of two complex regions can be determined by the (local) RCC8 topological relations between the associated simple regions. Moreover, this graph model has an inherent hierarchy which could be exploited for map generalization.
|
The Worboys-Bofakos tree model is perhaps the one most closely related to ours. Suppose @math is a bounded region that has a finite representation. In @cite_13 , each region @math is represented as a tree, where the root node represents, in our terms, the generalized region of @math , and each non-root node represents a simple region. If @math is a child node of @math , then @math is contained in @math . All child nodes of @math form a composite region (called a in @cite_13 ), which may separate @math into pieces.
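To illustrate the shape of such a tree, the following minimal Python sketch encodes only the nesting described above (the root stands for the generalized region, every child for a simple region contained in its parent). The class layout and names are assumptions made for illustration; they are not the authors' data structure or its operations.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RegionNode:
    """One node of a Worboys-Bofakos-style containment tree (illustrative only)."""
    name: str
    children: List["RegionNode"] = field(default_factory=list)

    def add(self, child: "RegionNode") -> "RegionNode":
        # Containment of the child region in this region is encoded by the edge.
        self.children.append(child)
        return child

def nesting_depth(node: RegionNode) -> int:
    """How many levels of holes and islands the represented region has."""
    return 1 + max((nesting_depth(c) for c in node.children), default=0)

# A region with one hole that in turn contains an island:
root = RegionNode("generalized region")
hole = root.add(RegionNode("hole"))
hole.add(RegionNode("island inside the hole"))
print(nesting_depth(root))  # 3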
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"1603434919"
],
"abstract": [
"This study aims to model an appropriate set of 2-dimensional spatial objects (i.e. areas) embedded in R2 with the usual metric and topology. The set of objects to be modelled is an extension of the set of 2-dimensional objects which can be represented within the vector-based data model. The model aims to capture explicitly some important topological properties of the spatial objects, e.g. connectedness and region inclusion. The construction discussed in this paper is capable of representing a large class of areal objects, including objects with holes which have islands (to any finite level). It has the virtue of being canonical, in the sense that any appropriate areal object has a unique representation in this model. The paper describes the model by specifying the areal objects under consideration and providing their representation. It also defines a set of operations and discusses algorithms for their implementation."
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
The backlog process @math at a lossless work-conserving constant-rate server with capacity @math is described by Reich's equation, see e.g. @cite_10 . The difficulty behind the analysis of a statistical bound @math for the steady-state backlog @math , i.e. letting @math , is to find the value @math that achieves the supremum in Reich's equation, since @math is a random variable; see @cite_21 for an explanation.
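For orientation, a common textbook form of Reich's equation and of the steady-state backlog is sketched below in generic notation, with A(s,t) denoting the arrivals in (s,t] and C the constant service rate; the notation of the cited works may differ.
\[
  B(t) \;=\; \sup_{0 \le s \le t} \bigl[\, A(s,t) - C\,(t-s) \,\bigr] ,
\]
and for arrivals with stationary increments the steady-state backlog is distributed as \( \sup_{\tau \ge 0} [\, A(\tau) - C\,\tau \,] \) by a Loynes-type argument. The supremum is taken over a random process, which is exactly why the maximizing time scale mentioned above is itself random.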
|
{
"cite_N": [
"@cite_21",
"@cite_10"
],
"mid": [
"1987582408",
"1497519142"
],
"abstract": [
"This paper establishes a link between two principal tools for the analysis of network traffic, namely, effective bandwidth and network calculus. It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds. By formulating well-known effective bandwidth expressions in terms of probabilistic envelope functions, the developed network calculus can be applied to a wide range of traffic types, including traffic that has self-similar characteristics. As applications, probabilistic lower bounds are presented on the service given by three different scheduling algorithms: static priority, earliest deadline first, and generalized processor sharing. Numerical examples show the impact of specific traffic models and scheduling algorithms on the multiplexing gain in a network.",
"The viewpoint is that communication networking is about efficient resource sharing. The focus is on the three building blocks of communication networking, namely, multiplexing, switching and routing. The approach is analytical, with the discussion being driven by mathematical analyses of and solutions to specific engineering problems. The result? A comprehensive, effectively organized treatment of core engineering issues in communication networking. Written for both the networking professional and for the classroom, this book covers fundamental concepts in detail and places design issues in context by drawing on real world examples from current technologies. ·Systematically uses mathematical models and analyses to drive the development of a practical understanding of core network engineering problems. ·Provides in-depth coverage of many current topics, including network calculus with deterministically-constrained traffic, congestion control for elastic traffic, packet switch queuing, switching architectures, virtual path routing, and routing for quality of service. ·Includes over 200 hands-on exercises and class-tested problems, dozens of schematic figures, a review of key mathematical concepts, and a glossary."
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
Large deviations theory is frequently used to analyze the asymptotic decay rate of the overflow probability of a backlog bound @cite_7 @cite_29 @cite_23 @cite_27 . The asymptotic calculation makes use of the principle of the largest term, stating that the term on the right-hand side strictly provides only a lower bound. For fBm traffic at a server with capacity @math the following asymptotic holds for the decay rate of the overflow probability @cite_29 . It follows that @math , where @cite_29 . The resulting overflow probability has a Weibull tail that simplifies to an exponential distribution in the special case @math . The backlog bound was proven to be logarithmically asymptotic in general and exact for @math .
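As a heuristic sketch of where the Weibullian tail comes from, assume the generic parametrization with mean rate m, variance \( \sigma^2 t^{2H} \) and free capacity \( C - m \); the exact constants and notation of the cited works may differ. Combining the largest-term principle with the Gaussian marginal of fBm gives
\[
  P(B > b) \;\gtrsim\; \sup_{t \ge 0} P\bigl( A(t) - C\,t > b \bigr)
  \;=\; \sup_{t \ge 0} \bar{\Phi}\!\left( \frac{b + (C-m)\,t}{\sigma\, t^{H}} \right) ,
\]
where the supremum is attained at \( t^{*} = H b / ((1-H)(C-m)) \). Treating the lower bound as an approximation and using \( \bar{\Phi}(x) \le e^{-x^{2}/2} \) yields the Weibullian shape
\[
  P(B > b) \;\approx\; \exp\!\left( - \frac{(C-m)^{2H}\, b^{\,2-2H}}{2\, \sigma^{2}\, \kappa(H)^{2}} \right) ,
  \qquad \kappa(H) = H^{H} (1-H)^{1-H} ,
\]
which reduces to an exponential tail for H = 1/2.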
|
{
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_23",
"@cite_7"
],
"mid": [
"1978905175",
"2129337887",
"1501276433",
"2104045250"
],
"abstract": [
"From the Publisher: Providing performance guarantees is one of the most important issues for future telecommunication networks. This book describes theoretical developments in performance guarantees for telecommunication networks from the last decade. Written for the benefit of graduate students and scientists interested in telecommunications-network performance this book consists of two parts.",
"We consider queueing systems where the workload process is assumed to have an associated large deviation principle with arbitrary scaling: there exist increasing scaling functions (at,vt,teE+) and a rate function such that if (Wt,teR+) denotes the workload process, then limi;^ 1 (Wt at > w) = — I(w)",
"",
"We consider the standard single-server queue with unlimited waiting space and the first-in first-out service discipline, but without any explicit independence conditions on the interarrival and service times. We find conditions for the steady-state waiting-time distribution to have asymptotics of the form x-1 log P(W > x) -+ -0* as x - o0 for 0* > 0. We require only stationarity of the basic sequence of service times minus interarrival times and a Girtner-Ellis condition for the cumulant generating function of the associated partial sums, i.e. n-1 log Eexp (OSn) --+ (0) as n - oo, plus regularity conditions on the decay rate function 0. The asymptotic decay rate 0* is the root of the equation 0(0) = 0. This result in turn implies a corresponding asymptotic result for the steady-state workload in a queue with general non-decreasing input. This asymptotic result covers the case of multiple independent sources, so that it provides additional theoretical support for a concept of effective bandwidths for admission control in multiclass queues based on asymptotic decay rates."
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
The large deviations result agrees with a solution deduced for the largest term in @cite_26 @cite_34 . The derivation makes use of the Gaussian distribution of the increments of fBm and yields an approximation in which @math is the complementary cumulative distribution function of a Gaussian random variable, i.e. of the increment of fBm. After maximizing over @math , the backlog bound approximation in @cite_26 is @math , where @math is identical to the large deviations decay rate above. An accessible introduction covering the derivation of this bound can also be found in @cite_0 .
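A small numerical sketch of this largest-term approximation follows; the parameter values, and the parametrization itself, are illustrative assumptions consistent with the generic notation above, not the cited papers' code.

import math

def fbm_backlog_tail_approx(b, C, m, sigma, H):
    """Largest-term approximation of P(backlog > b) for Gaussian/fBm input
    with mean rate m, scale sigma and Hurst parameter H at a constant-rate
    server of capacity C."""
    c = C - m                                   # free capacity
    t_star = H * b / ((1.0 - H) * c)            # time scale attaining the supremum
    x = (b + c * t_star) / (sigma * t_star ** H)
    return 0.5 * math.erfc(x / math.sqrt(2.0))  # complementary Gaussian CDF

# Illustrative parameters: capacity 100, mean rate 70, sigma 10, H = 0.8.
for b in (50, 100, 200):
    print(b, fbm_backlog_tail_approx(b, C=100.0, m=70.0, sigma=10.0, H=0.8))

Since the exponent decays only as b^(2-2H), the tail shrinks far more slowly than exponentially for H close to 1, which is the practical consequence of LRD discussed in this section.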
|
{
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_34"
],
"mid": [
"",
"1978061253",
"2093590637"
],
"abstract": [
"",
"A storage model with self-similar input process is studied. A relation coupling together the storage requirement, the achievable utilization and the output rate is derived. A lower bound for the complementary distribution function of the storage level is given.",
"An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced. >"
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
The proof of the large deviations theory builds on the Gärtner-Ellis condition, which establishes a direct relation to the effective bandwidth of a traffic flow @cite_29 . The theory of effective bandwidths, e.g. @cite_11 @cite_27 , is a major tool for the analysis of traffic flows as it gives a measure of resource requirements at different time scales. The effective bandwidth of a flow @math lies between its average and peak rate depending on the parameter @math . For fBm, in the case of @math the effective bandwidth exhibits continuous growth in @math due to LRD @cite_11 .
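Under the usual Gaussian parametrization of fBm traffic with mean rate m and variance \( \sigma^2 t^{2H} \), which is an assumption about notation and not necessarily that of the cited works, the effective bandwidth follows directly from the Gaussian moment generating function:
\[
  \alpha(\theta, t) \;=\; \frac{1}{\theta\, t} \log E\bigl[ e^{\theta A(t)} \bigr]
  \;=\; m \;+\; \frac{\theta\, \sigma^{2}}{2}\, t^{\,2H-1} ,
\]
which is constant in t for H = 1/2 and grows without bound in t for H > 1/2, i.e. the continuous growth due to LRD mentioned above.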
|
{
"cite_N": [
"@cite_27",
"@cite_29",
"@cite_11"
],
"mid": [
"1978905175",
"2129337887",
""
],
"abstract": [
"From the Publisher: Providing performance guarantees is one of the most important issues for future telecommunication networks. This book describes theoretical developments in performance guarantees for telecommunication networks from the last decade. Written for the benefit of graduate students and scientists interested in telecommunications-network performance this book consists of two parts.",
"We consider queueing systems where the workload process is assumed to have an associated large deviation principle with arbitrary scaling: there exist increasing scaling functions (at,vt,teE+) and a rate function such that if (Wt,teR+) denotes the workload process, then limi;^ 1 (Wt at > w) = — I(w)",
""
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
In @cite_21 , a connection between effective bandwidths and effective envelopes is established. In contrast to asymptotic results for large buffers from large deviations theory, effective envelopes in conjunction with the stochastic network calculus @cite_5 @cite_33 @cite_21 @cite_14 @cite_30 @cite_1 can provide non-asymptotic performance bounds. Moreover, the recent stochastic network calculus provides methods for the derivation of stochastic leftover service curves as well as for the composition of tandem systems. Effective envelopes @math are statistical upper bounds of the cumulative arrivals @math . An envelope for fBm traffic is derived in @cite_8 @cite_35 @cite_21 .
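As a hedged sketch of the kind of fBm envelope referred to here, using the same generic Gaussian parametrization as above, the tail bound \( P(A(t) > m t + x \sigma t^{H}) \le e^{-x^{2}/2} \) with \( x = \sqrt{-2 \ln \varepsilon} \) yields a point-wise effective envelope
\[
  \mathcal{G}^{\varepsilon}(t) \;=\; m\, t \;+\; \sigma\, t^{H} \sqrt{-2 \ln \varepsilon} ,
  \qquad
  P\bigl( A(t) > \mathcal{G}^{\varepsilon}(t) \bigr) \;\le\; \varepsilon \quad \text{for each fixed } t .
\]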
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_5"
],
"mid": [
"2158352245",
"1970562937",
"2149650042",
"2022940918",
"2158018483",
"1987582408",
"1589801689",
""
],
"abstract": [
"The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by spl Oscr (H log H), where H is the number of nodes traversed by a flow. Using currently available techniques, which compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by spl Oscr (H sup 3 ).",
"This article presents a method for the computation of the equivalent bandwidth of an aggregate of heterogeneous self-similar sources, as well as the time scales of interest for queueing systems fed by a fractal Brownian motion (fBm) process. Moreover, the fractal leaky bucket, a novel policing mechanism capable of accurately monitoring self-similar sources, is introduced.",
"The network calculus offers an elegant framework for determining worst-case bounds on delay and backlog in a network. This paper extends the network calculus to a probabilistic framework with statistical service guarantees. The notion of a statistical service curve is presented as a probabilistic bound on the service received by an individual flow or an aggregate of flows. The problem of concatenating per-node statistical service curves to form an end-to-end (network) statistical service curve is explored. Two solution approaches are presented that can each yield statistical network service curves. The first approach requires the availability of time scale bounds at which arrivals and departures at each node are correlated. The second approach considers a service curve that describes service over time intervals. Although the latter description of service is less general, it is argued that many practically relevant service curves may be compliant to this description",
"A network calculus is developed for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus is useful for a large class of input processes, including important processes exhibiting \"subexponentially bounded burstiness\" such as fractional Brownian motion. Moreover, it allows judicious capture of the salient features of real-time traffic, such as the \"cell\" and \"burst\" characteristics of multiplexed traffic. This accurate characterization is achieved by setting the bounding function as a sum of exponentials.",
"Several types of network traffic have been shown to exhibit long-range dependence (LRD). In this work, we show that the busy period of an ATM system driven by a long-range dependent process can be very large. We introduce a new traffic model based on a fractional Brownian motion envelope process. We show that this characterization can be used to predict queueing dynamics. Furthermore, we derive a new framework for computing delay bounds in ATM networks based on this traffic model. We show that it agrees with results given by large deviation theory with less computational complexity.",
"This paper establishes a link between two principal tools for the analysis of network traffic, namely, effective bandwidth and network calculus. It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds. By formulating well-known effective bandwidth expressions in terms of probabilistic envelope functions, the developed network calculus can be applied to a wide range of traffic types, including traffic that has self-similar characteristics. As applications, probabilistic lower bounds are presented on the service given by three different scheduling algorithms: static priority, earliest deadline first, and generalized processor sharing. Numerical examples show the impact of specific traffic models and scheduling algorithms on the multiplexing gain in a network.",
"Network calculus, a theory dealing with queuing systems found in computer networks, focuses on performance guarantees. The development of an information theory for stochastic service-guarantee analysis has been identified as a grand challenge for future networking research. Towards that end, stochastic network calculus, the probabilistic version or generalization of the (deterministic) Network Calculus, has been recognized by researchers as a crucial step. Stochastic Network Calculus presents a comprehensive treatment for the state-of-the-art in stochastic service-guarantee analysis research and provides basic introductory material on the subject, as well as discusses the most recent research in the area. This helpful volume summarizes results for stochastic network calculus, which can be employed when designing computer networks to provide stochastic service guarantees. Features and Topics: Provides a solid introductory chapter, providing useful background knowledge Reviews fundamental concepts and results of deterministic network calculus Includes end-of-chapter problems, as well as summaries and bibliographic comments Defines traffic models and server models for stochastic network calculus Summarizes the basic properties of stochastic network calculus under different combinations of traffic and server models Highlights independent case analysis Discusses stochastic service guarantees under different scheduling disciplines Presents applications to admission control and traffic conformance study using the analysis results Offers an overall summary and some open research challenges for further study of the topic Key Topics: Queuing systems Performance analysis and guarantees Independent case analysis Traffic and server models Analysis of scheduling disciplines Generalized processor sharing Open research challenges Researchers and graduates in the area of performance evaluation of computer communication networks will benefit substantially from this comprehensive and easy-to-follow volume. Professionals will also find it a worthwhile reference text. Professor Yuming Jiang at the Norwegian University of Science and Technology (NTNU) has lectured using the material presented in this text since 2006. Dr Yong Liu works at the Optical Network Laboratory, National University of Singapore, where he researches QoS for optical communication networks and Metro Ethernet networks.",
""
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
The definition of the effective envelope is point-wise in the sense that it can be violated at each point in time with overflow probability @math . Applying the approximation by the largest term, the envelope is used in @cite_35 to recover the backlog bound from large deviations theory.
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"1970562937"
],
"abstract": [
"This article presents a method for the computation of the equivalent bandwidth of an aggregate of heterogeneous self-similar sources, as well as the time scales of interest for queueing systems fed by a fractal Brownian motion (fBm) process. Moreover, the fractal leaky bucket, a novel policing mechanism capable of accurately monitoring self-similar sources, is introduced."
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
In contrast, the derivation of performance bounds using the stochastic network calculus builds on sample path arguments, such as the backlog bound, and requires a bound for @math for all @math , i.e. a sample path envelope. Such sample path envelopes are constructed in @cite_21 using Boole's inequality under the assumption of a time scale limit @math , i.e. by summing the constant point-wise overflow probabilities @math over all @math . The time scale in this context can be regarded as a constraint on the duration of busy periods. In the case of fBm, however, the duration of busy periods has been found to grow extremely fast with @math @cite_8 .
|
{
"cite_N": [
"@cite_21",
"@cite_8"
],
"mid": [
"1987582408",
"2158018483"
],
"abstract": [
"This paper establishes a link between two principal tools for the analysis of network traffic, namely, effective bandwidth and network calculus. It is shown that a general version of effective bandwidth can be expressed within the framework of a probabilistic version of the network calculus, where both arrivals and service are specified in terms of probabilistic bounds. By formulating well-known effective bandwidth expressions in terms of probabilistic envelope functions, the developed network calculus can be applied to a wide range of traffic types, including traffic that has self-similar characteristics. As applications, probabilistic lower bounds are presented on the service given by three different scheduling algorithms: static priority, earliest deadline first, and generalized processor sharing. Numerical examples show the impact of specific traffic models and scheduling algorithms on the multiplexing gain in a network.",
"Several types of network traffic have been shown to exhibit long-range dependence (LRD). In this work, we show that the busy period of an ATM system driven by a long-range dependent process can be very large. We introduce a new traffic model based on a fractional Brownian motion envelope process. We show that this characterization can be used to predict queueing dynamics. Furthermore, we derive a new framework for computing delay bounds in ATM networks based on this traffic model. We show that it agrees with results given by large deviation theory with less computational complexity."
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
Methods for the construction of sample path envelopes that do not require a priori assumptions on the relevant time scale have been developed in @cite_12 @cite_33 @cite_30 . The general approach is to use a point-wise envelope with parameter @math that has a decaying and integrable overflow profile @math , i.e. @math is finite. Typically, when constructing a sample path envelope, @math is substituted by a slack rate @math . The slack rate relaxes the envelope such that @math decreases with increasing interval width @math . Finally, taking Boole's inequality over all @math to derive the sample path overflow probability @math translates to integrating @math , which remains finite for all @math including @math .
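In generic notation, the slack-rate construction reads roughly as follows; this is a sketch of the common idea, not the exact statements of @cite_12 @cite_33 @cite_30.

% Point-wise envelope relaxed by a slack rate rho, with a decaying, integrable
% overflow profile epsilon(s).
\[
  \Pr\bigl[ A(t-s,t) > \mathcal{E}(s) + \rho s \bigr] \;\le\; \varepsilon(s),
  \qquad \int_{0}^{\infty} \varepsilon(s)\,\mathrm{d}s \;<\; \infty .
\]
% Boole's inequality over all interval widths then stays finite even without a
% time scale limit, since epsilon(s) is decreasing and integrable.
\[
  \Pr\Bigl[ \exists\, s \ge 0 : A(t-s,t) > \mathcal{E}(s) + \rho s \Bigr]
  \;\le\; \sum_{s=0}^{\infty} \varepsilon(s)
  \;\le\; \varepsilon(0) + \int_{0}^{\infty} \varepsilon(s)\,\mathrm{d}s .
\]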
|
{
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_12"
],
"mid": [
"2158352245",
"2022940918",
"95741843"
],
"abstract": [
"The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by spl Oscr (H log H), where H is the number of nodes traversed by a flow. Using currently available techniques, which compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by spl Oscr (H sup 3 ).",
"A network calculus is developed for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus is useful for a large class of input processes, including important processes exhibiting \"subexponentially bounded burstiness\" such as fractional Brownian motion. Moreover, it allows judicious capture of the salient features of real-time traffic, such as the \"cell\" and \"burst\" characteristics of multiplexed traffic. This accurate characterization is achieved by setting the bounding function as a sum of exponentials.",
""
]
}
|
0909.0633
|
2952548598
|
Fractional Brownian motion (fBm) emerged as a useful model for self-similar and long-range dependent Internet traffic. Approximate performance measures are known from large deviations theory for single queuing systems with fBm through traffic. In this paper we derive end-to-end performance bounds for a through flow in a network of tandem queues under fBm cross traffic. To this end, we prove a rigorous sample path envelope for fBm that complements previous approximate results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibullian tail. We employ the sample path envelope and the concept of leftover service curves to model the remaining service after scheduling fBm cross traffic at a system. Using composition results for tandem systems from the stochastic network calculus we derive end-to-end statistical performance bounds for individual flows in networks under fBm cross traffic. We discover that these bounds grow in O(n (log n)^(1/(2-2H))) for n systems in series where H is the Hurst parameter of the fBm cross traffic. We show numerical results on the impact of the variability and the correlation of fBm traffic on network performance.
|
The construction of a sample path envelope for fBm traffic is, however, not straightforward since no simple overflow profile exists, see Sect. . A simplifying approach is proposed in @cite_33 , where it is argued that the Weibull tail implies an envelope for fBm traffic. Such an envelope is, however, based on the approximation by the largest term. A rigorous sample path envelope for fBm as well as end-to-end performance bounds under fBm cross traffic have not been derived.
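For orientation, the Weibullian tail referred to here is usually quoted in the following schematic form; the constant \theta depends on the traffic and capacity parameters and is deliberately not spelled out.

\[
  \Pr[ B > b ] \;\approx\; \exp\!\bigl( -\theta\, b^{\,2-2H} \bigr),
  \qquad \tfrac{1}{2} \le H < 1 .
\]
% For H > 1/2 the exponent 2-2H is smaller than 1, so the tail decays more slowly
% than any exponential; this is why no simple (e.g. exponentially decaying)
% overflow profile fits fBm traffic.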
|
{
"cite_N": [
"@cite_33"
],
"mid": [
"2022940918"
],
"abstract": [
"A network calculus is developed for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus is useful for a large class of input processes, including important processes exhibiting \"subexponentially bounded burstiness\" such as fractional Brownian motion. Moreover, it allows judicious capture of the salient features of real-time traffic, such as the \"cell\" and \"burst\" characteristics of multiplexed traffic. This accurate characterization is achieved by setting the bounding function as a sum of exponentials."
]
}
|
0908.3957
|
2951370646
|
XML data warehouses form an interesting basis for decision-support applications that exploit heterogeneous data from multiple sources. However, XML-native database systems currently suffer from limited performances in terms of manageable data volume and response time for complex analytical queries. Fragmenting and distributing XML data warehouses (e.g., on data grids) allow to address both these issues. In this paper, we work on XML warehouse fragmentation. In relational data warehouses, several studies recommend the use of derived horizontal fragmentation. Hence, we propose to adapt it to the XML context. We particularly focus on the initial horizontal fragmentation of dimensions' XML documents and exploit two alternative algorithms. We experimentally validate our proposal and compare these alternatives with respect to a unified XML warehouse model we advocate for.
|
Pokorný models an XML star schema in XML by defining dimension hierarchies as sets of logically connected collections of XML data, and facts as XML data elements @cite_7 . Hümmer et al. propose a family of templates, named XCube, that enables the description of a multidimensional structure (dimension and fact data) for integrating several data warehouses into a virtual or federated warehouse @cite_20 . Rusu et al. propose a methodology, based on XQuery technology, for building XML data warehouses. This methodology covers processes such as data cleaning, summarization, intermediate XML documents, updating/linking existing documents, and creating fact tables @cite_4 . Facts and dimensions are represented by XML documents built with XQueries. Park et al. introduce a framework for the multidimensional analysis of XML documents, named XML-OLAP @cite_14 . XML-OLAP is based on an XML warehouse where every fact and dimension is stored as an XML document. The proposed model features a single repository of XML documents for facts and multiple repositories of XML documents for dimensions (one repository per dimension). Finally, Boussaïd et al. propose an XML-based methodology, named X-Warehousing, for warehousing complex data @cite_6 . They use XML Schema as a modeling language to represent user analysis needs.
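As a toy illustration of the idea shared by these designs, namely storing facts and dimensions as separate XML documents, the following Python sketch builds one dimension document and one fact document. The element and attribute names are hypothetical and do not follow any of the cited schemas.

# Minimal sketch: facts and dimensions kept as separate XML documents, with facts
# referencing dimension members by identifier. Names are illustrative only.
import xml.etree.ElementTree as ET

# A dimension document: one logical collection of members per dimension.
dim_customer = ET.Element("dimension", name="customer")
member = ET.SubElement(dim_customer, "member", id="c42")
ET.SubElement(member, "name").text = "ACME Corp."
ET.SubElement(member, "country").text = "FR"

# A fact document: measures plus references into the dimension documents.
fact = ET.Element("fact")
ET.SubElement(fact, "dimensionRef", dimension="customer", member="c42")
ET.SubElement(fact, "measure", name="amount").text = "199.90"

print(ET.tostring(dim_customer, encoding="unicode"))
print(ET.tostring(fact, encoding="unicode"))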
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_20"
],
"mid": [
"2750662274",
"1975337935",
"2120307394",
"1488872845",
""
],
"abstract": [
"Recently, a large number of XML documents are available on the Internet. This trend motivated many researchers to analyze them multi-dimensionally in the same way as relational data. In this paper, we propose a new framework for multidimensional analysis of XML documents, which we call XML-OLAP. We base XML-OLAP on XML warehouses where every fact data as well as dimension data are stored as XML documents. We build XML cubes from XML warehouses. We propose a new multidimensional expression language for XML cubes, which we call XML-MDX. XML-MDX statements target XML cubes and use XQuery expressions to designate the measure data. They specify text mining operators for aggregating text constituting the measure data. We evaluate XML-OLAP by applying it to a U.S. patent XML warehouse. We use XML-MDX queries, which demonstrate that XML-OLAP is effective for multi-dimensionally analyzing the U.S. patents.",
"Developing a data warehouse for XML documents involves two major processes: one of creating it, by processing XML raw documents into a specified data warehouse repository; and the other of querying it, by applying techniques to better answer users’ queries. This paper focuses on the first part; that is identifying a systematic approach for building a data warehouse of XML documents, specifically for transferring data from an underlying XML database into a defined XML data warehouse. The proposed methodology on building XML data warehouses covers processes including data cleaning and integration, summarization, intermediate XML documents, and updating linking existing documents and creating fact tables. In this paper, we also present a case study on how to put this methodology into practice. We utilise the XQuery technology in all of the above processes.",
"A large amount of heterogeneous information is now available in enterprises. Some their data sources are repositories of XML data or they are viewed as XML data independently on their inner implementation. In this paper, we study the foundations of XML data warehouses. We adapt the traditional star schema with explicit dimension hierarchies for XML environment. We propose the notion of XML-referential integrity for handling XML-dimension hierarchies. For querying XML data warehouses, we introduce a semijoin operation based on approximate matching XML data and discuss its effective evaluation.",
"XML is suitable for structuring complex data coming from different sources and supported by heterogeneous formats. It allows a flexible formalism capable to represent and store different types of data. Therefore, the importance of integrating XML documents in data warehouses is becoming increasingly high. In this paper, we propose an XML-based methodology, named X-Warehousing, which designs warehouses at a logical level, and populates them with XML documents at a physical level. Our approach is mainly oriented to users analysis objectives expressed according to an XML Schema and merged with XML data sources. The resulted XML Schema represents the logical model of a data warehouse. Whereas, XML documents validated against the analysis objectives populate the physical model of the data warehouse, called the XML cube.",
""
]
}
|
0908.3317
|
2953125487
|
We consider wireless networks in which multiple paths are available between each source and destination. We allow each source to split traffic among all of its available paths, and ask the question: how do we attain the lowest possible number of transmissions to support a given traffic matrix? Traffic bound in opposite directions over two wireless hops can utilize the reverse carpooling'' advantage of network coding in order to decrease the number of transmissions used. We call such coded hops as hyper-links''. With the reverse carpooling technique longer paths might be cheaper than shorter ones. However, there is a prisoners dilemma type situation among sources -- the network coding advantage is realized only if there is traffic in both directions of a shared path. We develop a two-level distributed control scheme that decouples user choices from each other by declaring a hyper-link capacity, allowing sources to split their traffic selfishly in a distributed fashion, and then changing the hyper-link capacity based on user actions. We show that such a controller is stable, and verify our analytical insights by simulation.
|
Network coding research was initiated by a seminal paper by Ahlswede et al. @cite_7 and has since attracted significant interest from the research community. Many initial works on network coding focused on establishing connections between a fixed source and a set of terminal nodes. Li et al. @cite_4 showed that the maximum rate of a multicast connection is equal to the minimum capacity of a cut that separates the source and any terminal. In subsequent work, Koetter and Médard @cite_2 developed an algebraic framework for network coding and investigated linear network codes for directed graphs with cycles. Network coding for wireless networks has been considered by Katabi @cite_1 . The proposed architecture, referred to as COPE, contains a special network coding layer between the IP and MAC layers. In @cite_13 , Chachulski et al. proposed an opportunistic routing protocol, referred to as MORE, that randomly mixes packets belonging to the same flow before forwarding them to the next hop. Sagduyu and Ephremides @cite_10 focused on applications of network coding in simple path topologies (referred to in @cite_10 as networks) and formulated related cross-layer optimization problems.
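The reverse carpooling idea behind COPE-style coding can be illustrated in a few lines of Python: a relay XORs two packets travelling in opposite directions and broadcasts the combination once, and each endpoint decodes using the packet it already knows. This is only a sketch of the coding principle, not of the COPE protocol itself; the packet contents and padding are made up.

# Reverse-carpooling sketch: one coded broadcast replaces two unicasts at the relay.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Packet from A destined to B and packet from B destined to A, both relayed
# through the same intermediate node R (padded to equal length).
pkt_a_to_b = b"hello from A".ljust(16, b"\x00")
pkt_b_to_a = b"hi from B".ljust(16, b"\x00")

# The relay transmits a single XOR-coded broadcast.
coded = xor_bytes(pkt_a_to_b, pkt_b_to_a)

# A already knows pkt_a_to_b (it sent it), so it recovers pkt_b_to_a; B likewise.
assert xor_bytes(coded, pkt_a_to_b) == pkt_b_to_a
assert xor_bytes(coded, pkt_b_to_a) == pkt_a_to_b
print("decoded OK: one broadcast replaces two relay transmissions")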
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_10",
"@cite_1",
"@cite_2",
"@cite_13"
],
"mid": [
"2106403318",
"2105831729",
"",
"",
"2138928022",
"2127350146"
],
"abstract": [
"Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
"We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.",
"",
"",
"We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays.",
"Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC with routing, imposing a strict schedule on routers' access to the medium. Although the scheduler delivers opportunistic gains, it misses some of the inherent features of the 802.11 MAC. For example, it prevents spatial reuse and thus may underutilize the wireless medium. It also eliminates the layering abstraction, making the protocol less amenable to extensions to alternate traffic types such as multicast. This paper presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. Experimental results from a 20-node wireless testbed show that MORE's median unicast throughput is 22 higher than ExOR, and the gains rise to 45 over ExOR when there is a chance of spatial reuse. For multicast, MORE's gains increase with the number of destinations, and are 35-200 greater than ExOR."
]
}
|
0908.3090
|
2951963684
|
We propose a modeling framework for generating security protocol specifications. The generated protocol specifications rely on the use of a sequential and a semantical component. The first component defines protocol properties such as preconditions, effects, message sequences and it is developed as a WSDL-S specification. The second component defines the semantic aspects corresponding to the messages included in the first component by the use of ontological constructions and it is developed as an OWL-based specification. Our approach was validated on 13 protocols from which we mention: the ISO9798 protocol, the CCITTX.509 data transfer protocol and the Kerberos symmetric key protocol.
|
An approach that aims at the automatic implementation of security protocols is given in @cite_11 . This approach uses a formal description as a specification which is executed by the participants. The proposed specification does not make use of Web service technologies, which makes interoperability between systems executing the given specifications a real issue. In addition, because our approach uses the ontology model, it benefits from several properties specific to ontologies, such as semantic properties, extensibility, and the reusability of ontologies developed by others.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2098496390"
],
"abstract": [
"We present an automatic implementation system of security protocols based in formal description techniques. A sufficiently complete and concise formal specification that has allowed us to define the state machine that corresponds to a security protocol has been designed to achieve our goals. This formal specification makes it possible to incorporate in a flexible way the security mechanisms and functions (random numbers generation, timestamps, symmetric-key encryption, public-key cryptography, etc). Our solution implies the incorporation of an additional security layer LEI (Logical Element of Implementation) in the TCP IP architecture. This additional layer be able both to interpret and to implement any security protocol from its formal specification. Our system provides an applications programming interface (API) for the development of distributed applications in the Internet like the e-commerce, bank transfers, network management or distribution information services that makes transparent to them the problem of security in the communications."
]
}
|
0908.3090
|
2951963684
|
We propose a modeling framework for generating security protocol specifications. The generated protocol specifications rely on the use of a sequential and a semantical component. The first component defines protocol properties such as preconditions, effects, message sequences and it is developed as a WSDL-S specification. The second component defines the semantic aspects corresponding to the messages included in the first component by the use of ontological constructions and it is developed as an OWL-based specification. Our approach was validated on 13 protocols from which we mention: the ISO9798 protocol, the CCITTX.509 data transfer protocol and the Kerberos symmetric key protocol.
|
The authors of @cite_12 propose a security ontology for resource annotation. The proposed ontology defines concepts for security and authorization, for cryptographic algorithms, and for credentials. This proposal was designed to be used in the process of security protocol description and selection based on several criteria. In contrast, our ontologies have a more detailed construction. For example, the ontology from @cite_12 defines a collection of cryptographic algorithms, but it does not define the algorithm mode, which is more implementation-specific information. In addition, we propose not only an ontology, but also a set of rules to construct a specification.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2121697735"
],
"abstract": [
"Annotation with security-related metadata enables discovery of resources that meet security requirements. This paper presents the NRL Security Ontology, which complements existing ontologies in other domains that focus on annotation of functional aspects of resources. Types of security information that could be described include mechanisms, protocols, objectives, algorithms, and credentials in various levels of detail and specificity. The NRL Security Ontology is more comprehensive and better organized than existing security ontologies. It is capable of representing more types of security statements and can be applied to any electronic resource. The class hierarchy of the ontology makes it both easy to use and intuitive to extend. We applied this ontology to a Service Oriented Architecture to annotate security aspects of Web service descriptions and queries. A refined matching algorithm was developed to perform requirement-capability matchmaking that takes into account not only the ontology concepts, but also the properties of the concepts."
]
}
|
0908.3090
|
2951963684
|
We propose a modeling framework for generating security protocol specifications. The generated protocol specifications rely on the use of a sequential and a semantical component. The first component defines protocol properties such as preconditions, effects, message sequences and it is developed as a WSDL-S specification. The second component defines the semantic aspects corresponding to the messages included in the first component by the use of ontological constructions and it is developed as an OWL-based specification. Our approach was validated on 13 protocols from which we mention: the ISO9798 protocol, the CCITTX.509 data transfer protocol and the Kerberos symmetric key protocol.
|
Several other security ontologies have been proposed @cite_1 @cite_2 . Because they do not relate to the specification of security protocols, they cannot replace our proposal, but only complement it with additional concepts.
|
{
"cite_N": [
"@cite_1",
"@cite_2"
],
"mid": [
"2140941224",
"1975980841"
],
"abstract": [
"The use of ontologies for representing knowledge provides us with organization, communication and reusability. Information security is a serious requirement which must be carefully considered. Concepts and relations managed by any scientific community need to be formally defined and ontological engineering supports their definition. In this paper, the method of systematic review is applied with the purpose of identifying, extracting and analyzing the main proposals for security ontologies. The main identified proposals are compared using a formal framework and we conclude by stating their early state of development and the need of additional research efforts.",
"Information assurance, security, and privacy have moved from narrow topics of interest to information system designers to become critical issues of fundamental importance to society. This opens up new requirements and opportunities for novel approaches. Meeting this challenge requires to advance the theory and practice of security, privacy, and trust of Web-based applications and to provide declarative policy representation languages and ontologies together with algorithms to reason about policies. This paper summarizes an ontological approach to enhancing the Semantic Web with security."
]
}
|
0908.3644
|
2041282222
|
The random key graph is a random graph naturally associated with the random key predistribution scheme introduced by Eschenauer and Gligor in the context of wireless sensor networks (WSNs). For this class of random graphs, we establish a new version of a conjectured zero-one law for graph connectivity as the number of nodes becomes unboundedly large. The results reported here complement and strengthen recent work on this conjecture by Blackburn and Gerke. In particular, the results are given under conditions which are more realistic for applications to WSNs.
|
The zero-law has recently been established independently by Blackburn and Gerke @cite_17 , and by Yağan and Makowski @cite_20 . In both papers, it was shown that $\lim_{n \to \infty} \Pr\bigl[\, K(n; (K_n,P_n)) \text{ contains no isolated node} \,\bigr] = 0$ whenever @math , a result which clearly implies the conjectured zero-law.
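The model can be made concrete with a small Monte-Carlo sketch: in the random key graph K(n; (K_n, P_n)) of the Eschenauer-Gligor scheme, each of the n nodes draws K distinct keys uniformly at random from a pool of P keys, and two nodes are adjacent iff their key rings share a key. The code below estimates the probability that no node is isolated; the parameter values are arbitrary illustration values, not those used in the paper.

# Monte-Carlo estimate of P[random key graph has no isolated node].
import random

def has_no_isolated_node(n: int, K: int, P: int) -> bool:
    """Sample one random key graph and check for isolated nodes."""
    rings = [frozenset(random.sample(range(P), K)) for _ in range(n)]
    for i in range(n):
        if not any(rings[i] & rings[j] for j in range(n) if j != i):
            return False
    return True

def estimate(n: int, K: int, P: int, trials: int = 100) -> float:
    return sum(has_no_isolated_node(n, K, P) for _ in range(trials)) / trials

if __name__ == "__main__":
    # For fixed n and K, enlarging the key pool P thins out the graph and
    # makes isolated nodes (and hence disconnection) more likely.
    for P in (100, 1000, 10000):
        print(P, estimate(n=200, K=5, P=P))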
|
{
"cite_N": [
"@cite_20",
"@cite_17"
],
"mid": [
"2155003029",
"2063090892"
],
"abstract": [
"We consider the random graph induced by the random key predistribution scheme of Eschenauer and Gligor under the assumption of full visibility. We show the existence of a zero-one law for the absence of isolated nodes, and complement it by a Poisson convergence for the number of isolated nodes. Leveraging earlier results and analogies with Erdos-Renyi graphs, we explore similar results for the property of graph connectivity.",
"We study properties of the uniform random intersection graph model G(n,m,d). We find asymptotic estimates on the diameter of the largest connected component of the graph near the phase transition and connectivity thresholds. Moreover we manage to prove an asymptotically tight bound for the connectivity and phase transition thresholds for all possible ranges of d, which has not been obtained before. The main motivation of our research is the usage of the random intersection graph model in the studies of wireless sensor networks."
]
}
|
0908.3644
|
2041282222
|
The random key graph is a random graph naturally associated with the random key predistribution scheme introduced by Eschenauer and Gligor in the context of wireless sensor networks (WSNs). For this class of random graphs, we establish a new version of a conjectured zero-one law for graph connectivity as the number of nodes becomes unboundedly large. The results reported here complement and strengthen recent work on this conjecture by Blackburn and Gerke. In particular, the results are given under conditions which are more realistic for applications to WSNs.
|
Blackburn and Gerke @cite_17 also succeeded in generalizing the one-law result by Di in a number of directions: under additional conditions they showed [Thm. 5 of @cite_17] a one-law that is weaker than the one-law in the conjecture. However, in the process of establishing this result, they also show [Thm. 3 of @cite_17] that the conjecture does hold in the special case @math for all @math , without any constraint on the size of the key pools, say @math or @math . Specifically, the one-law is shown to hold whenever the scaling is done with @math . As pointed out by these authors, it is now easy to conclude that the one-law holds whenever @math and @math ; this corresponds to a constraint @math .
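For context, the conjectured zero-one law for connectivity of random key graphs is commonly quoted in roughly the following form; this is the folklore version, and the precise conditions hidden behind the @math placeholders above may differ from it.

\[
  \frac{K_n^{2}}{P_n} \;\sim\; c\,\frac{\log n}{n}
  \quad \Longrightarrow \quad
  \lim_{n \to \infty} \Pr\bigl[\, K(n; (K_n,P_n)) \text{ is connected} \,\bigr]
  \;=\;
  \begin{cases}
    0, & \text{if } c < 1, \\
    1, & \text{if } c > 1 .
  \end{cases}
\]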
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2063090892"
],
"abstract": [
"We study properties of the uniform random intersection graph model G(n,m,d). We find asymptotic estimates on the diameter of the largest connected component of the graph near the phase transition and connectivity thresholds. Moreover we manage to prove an asymptotically tight bound for the connectivity and phase transition thresholds for all possible ranges of d, which has not been obtained before. The main motivation of our research is the usage of the random intersection graph model in the studies of wireless sensor networks."
]
}
|
0908.2440
|
1576311670
|
We consider the theoretical model of Crystalline robots, which have been introduced and prototyped by the robotics community. These robots consist of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. These operations suffice to reconfigure between any two given (connected) shapes. The worst-case number of sequential moves required to transform one connected configuration to another is known to be Θ(n). However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Θ(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.
|
A @math -time sequential algorithm for reconfiguring general classes of lattice-based robotic models was given in @cite_3 , where it was also shown that this number of individual atom moves is sometimes required. The first reconfiguration algorithm specifically proposed for Crystalline robots is the algorithm of @cite_12 . It reconfigures one shape into another by constructing a strip of modules as an intermediate step. This is done in @math steps. The algorithm is not in-place; the additional area used is @math , in terms of the unit volume of a module. The algorithm of @cite_9 and its variant @cite_15 are in-place and have a more parallelized nature, but still use @math steps. In fact, for each of these algorithms there exist simple instances that require @math steps.
|
{
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_3",
"@cite_15"
],
"mid": [
"2056453745",
"1526777436",
"1999730797",
"2126295536"
],
"abstract": [
"Self-reconfigurable robots are versatile systems consisting of large numbers of independent modules. Effective use of these systems requires parallel actuation and planning, both for efficiency and independence from a central controller. In this paper we present the PacMan algorithm, a technique for distributed actuation and planning for systems with two- or three-dimensional unit-compressible modules. We give two versions of the algorithm along with correctness analysis. We also analyze the parallel actuation capability of the algorithm, showing that it will not deadlock and will avoid disconnecting the robot. We have implemented PacMan on the Crystal robot, a hardware system developed in our lab, and we present experiments and discuss the feasibility of large-scale implementation.",
"We discuss a robotic system composed of i>Crystalline modules. Crystalline modules can aggregate together to form distributed robot systems. Crystalline modules can move relative to each other by expanding and contracting. This actuation mechanism permits automated shape metamorphosis. We describe the Crystalline module concept and show the basic motions that enable a Crystalline robot system to self-reconfigure. We present an algorithm for general self-reconfiguration and describe simulation experiments.",
"In this article we examine the problem of dynamic self-reconfiguration of a class of modular robotic systems referred to as metamorphic systems. A metamorphic robotic system is a collection of mechatronic modules, each of which has the ability to connect, disconnect, and climb over adjacent modules. A change in the macroscopic morphology results from the locomotion of each module over its neighbors. Metamorphic systems can therefore be viewed as a large swarm of physically connected robotic modules that collectively act as a single entity. What distinguishes metamorphic systems from other reconfigurable robots is that they possess all of the following properties: (1) a large number of homogeneous modules; (2) a geometry such that modules fit within a regular lattice; (3) self-reconfigurability without outside help; (4) physical constraints which ensure contact between modules. In this article, the kinematic constraints governing metamorphic robot self-reconfiguration are addressed, and lower and upper bounds are established for the minimal number of moves needed to change such systems from any initial to any final specified configuration. These bounds are functions of initial and final configuration geometry and can be computed very quickly, while it appears that solving for the precise number of minimal moves cannot be done in polynomial time. It is then shown how the bounds developed here are useful in evaluating the performance of heuristic motion planning reconfiguration algorithms for metamorphic systems. © 1996 John Wiley & Sons, Inc.",
"We present a complete, local, and parallel reconfiguration algorithm for metamorphic robots made up of Telecubes, six degree of freedom cube shaped modules currently being developed at PARC. We show that by using 2 spl times 2 spl times 2 meta-modules we can achieve completeness of reconfiguration space using only local rules. Furthermore, this reconfiguration can be done in place and massively in parallel with many simultaneous module movements. Finally we present a loose quadratic upper bound on the total number of module movements required by the algorithm."
]
}
|
0908.2440
|
1576311670
|
We consider the theoretical model of Crystalline robots, which have been introduced and prototyped by the robotics community. These robots consist of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. These operations suffice to reconfigure between any two given (connected) shapes. The worst-case number of sequential moves required to transform one connected configuration to another is known to be Θ(n). However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Θ(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.
|
A linear-time parallel algorithm for reconfiguring within the bounding box of source and target is given in @cite_5 . The total number of individual moves is also linear. No restrictions are made concerning physical properties of the robots. For example, @math strength is required, since modules can carry towers and push large masses during certain operations.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2099361645"
],
"abstract": [
"In this paper we propose a novel algorithm that, given a source robot S and a target robot T, reconfigures S into T. Both S and T are robots composed of n atoms arranged in 2x2x2 meta-modules. The reconfiguration involves a total of O(n) atomic operations (expand, contract, attach, detach) and is performed in O(n) parallel steps. This improves on previous reconfiguration algorithms [D. Rus, M. Vona, Crystalline robots: Self-reconfiguration with compressible unit modules, Autonomous Robots 10 (1) (2001) 107-124; S. Vassilvitskii, M. Yim, J. Suh, A complete, local and parallel reconfiguration algorithm for cube style modular robots, in: Proc. of the IEEE Intl. Conf. on Robotics and Automation, 2002, pp. 117-122; Z. Butler, D. Rus, Distributed planning and control for modular robots with unit-compressible modules, Intl. J. Robotics Res. 22 (9) (2003) 699-715, doi:10.1177 02783649030229002], which require O(n^2) parallel steps. Our algorithm is in-place; that is, the reconfiguration takes place within the union of the bounding boxes of the source and target robots. We show that the algorithm can also be implemented in a synchronous, distributed fashion."
]
}
|
0908.2440
|
1576311670
|
We consider the theoretical model of Crystalline robots, which have been introduced and prototyped by the robotics community. These robots consist of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. These operations suffice to reconfigure between any two given (connected) shapes. The worst-case number of sequential moves required to transform one connected configuration to another is known to be Θ(n). However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Θ(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.
|
However, it turns out that if the strength of each atom is restricted to @math , i.e. one atom can only pull or push a fixed number of other atoms, any Crystalline robot can still be reconfigured in-place, using @math parallel steps @cite_10 . Unavoidably, the total number of individual moves is @math . This is shown to be worst-case optimal.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1487445608"
],
"abstract": [
"In this paper we propose novel algorithms for reconfiguring modular robots that are composed of n atoms. Each atom has the shape of a unit cube and can expand contract each face by half a unit, as well as attach to or detach from faces of neighboring atoms. For universal reconfiguration, atoms must be arranged in 2×2×2 modules. We respect certain physical constraints: each atom reaches at most unit velocity and (via expansion) can displace at most one other atom. We require that one of the atoms can store a map of the target configuration. Our algorithms involve a total of O(n 2) such atom operations, which are performed in O(n) parallel steps. This improves on previous reconfiguration algorithms, which either use O(n 2) parallel steps [8,10,4] or do not respect the constraints mentioned above [1]. In fact, in the setting considered, our algorithms are optimal, in the sense that certain reconfigurations require Ω(n) parallel steps. A further advantage of our algorithms is that reconfiguration can take place within the union of the source and target configurations."
]
}
|
0908.2440
|
1576311670
|
We consider the theoretical model of Crystalline robots, which have been introduced and prototyped by the robotics community. These robots consist of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. These operations suffice to reconfigure between any two given (connected) shapes. The worst-case number of sequential moves required to transform one connected configuration to another is known to be Θ(n). However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Θ(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.
|
When @math strength is required but velocities are allowed to grow arbitrarily over time, reconfiguration takes @math steps @cite_11 . Note that this is for 2D robots and the third dimension is used as an intermediate.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1534588514"
],
"abstract": [
"A self-reconfigurable (SR) robot is one composed of many small modules that autonomously act to change the shape and structure of the robot. In this paper we consider a general class of SR robot modules that have rectilinear shape that can be adjusted between fixed dimensions, can transmit forces to their neighbors, and can apply additional forces of unit maximum magnitude to their neighbors. We present a kinodynamically optimal algorithm for general reconfiguration between any two distinct, 2D connected configurations of n SR robot modules. The algorithm uses a third dimension as workspace during reconfiguration. This entire movement is achieved within O( √ n) movement time in the worst case, which is the asymptotically optimal time bound. The only prior reconfiguration algorithm achieving this time bound was restricted to linearly arrayed start and finish configurations (known as the “x-axis to y-axis problem”). All other prior work on SR robots assumed a constant velocity bound on module movement and so required at least time linear in n to do the reconfiguration."
]
}
|