aid (string, 9–15 chars) | mid (string, 7–10 chars) | abstract (string, 78–2.56k chars) | related_work (string, 92–1.77k chars) | ref_abstract (dict)
---|---|---|---|---|
1010.4920
|
2129425110
|
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given a fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and that a sorted-SNR channel pairing strategy, in which each relay pairs its incoming and outgoing channels by their SNR order, is sum-rate optimal. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately. This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution, and it significantly reduces the computational complexity of finding the jointly optimal solution. It follows that the channel pairing problem in the joint optimization can again be decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided, as well as an asymptotically optimal solution for AF relaying. Numerical results demonstrate a substantial performance gain of the jointly optimal solution over several suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than from optimal power allocation, through judicious exploitation of the variation among multiple channels. The impact of channel-gain variation, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
|
For OFDM systems, a typical example of multi-channel systems, the concept of CP was first introduced independently in @cite_21 and @cite_0 for dual-hop AF relaying, where heuristic pairing algorithms based on the order of channel quality were proposed. For relaying without a direct source-destination link, @cite_21 used integer programming to find the optimal pairing that maximizes the sum SNR. From a system-design perspective, the sorted-SNR CP scheme was proposed in @cite_0 and shown to be optimal for the noise-free relaying case under the assumption of uniform power allocation. (A toy sketch of this rank-for-rank pairing rule follows this row.)
|
{
"cite_N": [
"@cite_0",
"@cite_21"
],
"mid": [
"2119788079",
"2155429953"
],
"abstract": [
"Amplify-and-Forward (AF) is a simple but effective relaying concept for multihop networks that combines transparency regarding modulation format and coding scheme with ease of implementation. Conventional AF, however, does not take into account the transfer function of the first and the second hop channels. For OFDM based systems, this appears to be sub-optimum. In this paper an AF relaying scheme is proposed that adapts to the transfer functions of both channels. The relay estimates the transfer functions and rearranges the subcarriers in each OFDM packet such that an optimum coupling between subcarriers of the first and the second hop channels occurs. Additionally, a signaling scheme is developed that allows for an efficient transfer of the necessary information. Simulations show that the proposed relaying scheme achieves significant SNR gains over conventional OFDM relaying.",
"The paper considers a source-relay-destination link in which the relay node is allowed to reassign the input subchannels to different output subchannels. In particular, we consider a channel-aware relay node and compute the subchannel reassignment that optimizes a selected performance criterion. The numerical results suggest that optimized subchannel reassignment is particularly beneficial in frequency-selective channels and in channels where interference information is available at transmitter."
]
}
|
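To make the sorted-SNR rule concrete, here is a minimal Python sketch of rank-for-rank pairing at a single relay. The function name, array values, and NumPy usage are our own illustration, not code from the cited works:

```python
import numpy as np

def sorted_snr_pairing(incoming_snr, outgoing_snr):
    """Pair a relay's incoming and outgoing channels rank-for-rank:
    the k-th strongest incoming channel is matched with the k-th
    strongest outgoing channel."""
    in_order = np.argsort(incoming_snr)[::-1]    # strongest first
    out_order = np.argsort(outgoing_snr)[::-1]
    return list(zip(in_order, out_order))

# Toy example with 4 subchannels per hop (made-up SNR values).
incoming = np.array([3.1, 0.4, 7.9, 1.2])
outgoing = np.array([0.9, 5.5, 2.2, 6.3])
print(sorted_snr_pairing(incoming, outgoing))
# -> [(2, 3), (0, 1), (3, 2), (1, 0)]
```

Because the rule depends only on each relay's own per-hop SNR orderings, it decomposes across relays, which is the multihop separation result stated in the abstract above.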
1010.4920
|
2129425110
|
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given a fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and that a sorted-SNR channel pairing strategy, in which each relay pairs its incoming and outgoing channels by their SNR order, is sum-rate optimal. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately. This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution, and it significantly reduces the computational complexity of finding the jointly optimal solution. It follows that the channel pairing problem in the joint optimization can again be decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided, as well as an asymptotically optimal solution for AF relaying. Numerical results demonstrate a substantial performance gain of the jointly optimal solution over several suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than from optimal power allocation, through judicious exploitation of the variation among multiple channels. The impact of channel-gain variation, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
|
These works sparked interest in further research in this area. In the absence of a direct source-destination link, and for the practical case of a noisy relay, the authors of @cite_20 used the L-superadditivity property of the rate function to prove that sorted-SNR CP remains optimal for sum-rate maximization in a dual-hop AF-relaying OFDM system. Subsequently, it was proved in @cite_5, through a different approach, that the sorted-SNR CP scheme is optimal for both AF and DF relaying in the same setup. When a direct source-destination link is available, @cite_3 presented two suboptimal CP schemes. For the same setup, a low-complexity optimal CP scheme was later established in @cite_10 for dual-hop AF relaying, and the effect of the direct path on the optimal pairing was characterized. In addition, it was shown in @cite_10 that, under certain conditions on the relay power amplification, channel pairing is optimal among all possible linear processing at the relay.
|
{
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_10",
"@cite_20"
],
"mid": [
"2113418259",
"1987847589",
"1713504836",
"2098255692"
],
"abstract": [
"We consider a two-hop relaying network in which orthogonal frequency division multiplexing (OFDM) is employed for the source-to-destination, the source-to-relay and the relay- to-destination links. Amplify-and-forward (AF) and decode-and- forward (DF) policies are both discussed with or without two-hop diversity, respectively, for the relaying network with a sum-power constraint. An unified approach is used for optimal power allocation in the four different relaying scenarios. First, equivalent channel gains are developed for any given subcarrier pair in each scenario, and then optimal power allocation can be obtained by applying the classic water-filling method. Moreover, we provide the proof to the optimality of sorted subcarrier pairing for AF and DF relaying without diversity, which, combined with optimal power allocation, can offer further performance gain.",
"In this paper, a two-hop amplify-and-forward cooperative diversity system using OFDM modulation was considered. We proposed an associated waterfllling power allocation scheme under separate power constraints at source and relay, which allocated source and relay power in an associated fashion. The scheme first allocated power to subcarriers uniformly, then gave an approximate equivalent channel gain model, and last employed the classic waterfilling algorithm to optimize the power allocation at relay and source, respectively. Moreover, we investigated the subcarrier pairing problem and presented a modified sorted subcarrier pairing algorithm, which considered the effect of source-destination link as well as source-relay link. The simulation results show that the associated waterfilling power allocation scheme and the modified sorted subcarrier pairing algorithm both could achieve higher average rate alone, and that the combination of them could improve the performance further.",
"In this paper, we consider the amplified-and-forward relaying in an OFDM system with unitary linear processing at the relay. We proposed a general analytical framework to find the unitary linear processing matrix that maximizes the system achievable rate. We show that the optimal processing matrix is a permutation matrix, which implies that a subcarrier pairing strategy is optimal. We further derived the optimal subcarrier pairing schemes for scenarios with and without the direct source-destination path for diversity. Simulation results are presented to demonstrate the achievable gain of optimal subcarrier pairing compared with non-optimal linear processing and non-pairing.",
"The paper studies subchannel assignment in a two-hop OFDM relay system in which the transmitting nodes (source and relay) have access to channel information and interference-related information. We show that with L-superadditive relay (performance) functions a simple ranking of subchannels leads to the optimal assignment with a very low computational complexity. Numerical results quantify the benefit of subchannel assignment in a frequency-selective channel."
]
}
|
1010.4920
|
2129425110
|
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given a fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and that a sorted-SNR channel pairing strategy, in which each relay pairs its incoming and outgoing channels by their SNR order, is sum-rate optimal. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately. This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution, and it significantly reduces the computational complexity of finding the jointly optimal solution. It follows that the channel pairing problem in the joint optimization can again be decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided, as well as an asymptotically optimal solution for AF relaying. Numerical results demonstrate a substantial performance gain of the jointly optimal solution over several suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than from optimal power allocation, through judicious exploitation of the variation among multiple channels. The impact of channel-gain variation, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
|
The related problem of optimal PA for a dual-hop OFDM system has been studied in several works @cite_23 @cite_11 @cite_24 for different relay strategies and power constraints. The problem of jointly optimizing CP and PA in a dual-hop OFDM system was studied for AF and DF relaying in @cite_4 and @cite_18, respectively, where a direct source-destination link was assumed available. The joint optimization problems were formulated as mixed-integer programs and solved in the Lagrangian dual domain. Exact optimality under an arbitrary number of channels was not established; instead, by adopting the time-sharing argument @cite_22, the proposed solutions were shown to be optimal in the limiting case as the number of channels approaches infinity.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_24",
"@cite_23",
"@cite_11"
],
"mid": [
"1862689498",
"2171327552",
"2161272050",
"",
"2140865714",
""
],
"abstract": [
"In this paper, a point-to-point orthogonal-frequency- division multiplexing (OFDM) system with a decode-and- forward (DF) relay is considered. The transmission consists of two hops. The source transmits in the first hop, and the relay transmits in the second hop. Each hop occupies one time slot. The relay is half-duplex, and capable of decoding the message on a particular subcarrier in one time slot, and re-encoding and forwarding it on a different subcarrier in the next time slot. Thus, each message is transmitted on a pair of subcarriers in two hops. It is assumed that the destination is capable of combining the signals from the source and the relay pertaining to the same message. The goal is to maximize the weighted sum rate of the system by jointly optimizing subcarrier pairing and power allocation on each subcarrier in each hop. The weighting of the rates is to take into account the fact that different subcarriers may carry signals for different services. Both total and individual power constraints for the source and the relay are investigated. For the situations where the relay does not transmit on some subcarriers because doing so does not improve the weighted sum rate, we further allow the source to transmit new messages on these idle subcarriers. To the best of our knowledge, such a joint optimization inclusive of the destination combining has not been discussed in the literature. The problem is first formulated as a mixed integer programming problem. It is then transformed to a convex optimization problem by continuous relaxation, and solved in the dual domain. Based on the optimization results, algorithms to achieve feasible solutions are also proposed. Simulation results show that the proposed algorithms almost achieve the optimal weighted sum rate and outperform the existing methods in various channel conditions.",
"In this paper, we study the joint allocation of three types of resources, namely, power, subcarriers and relay nodes, in multi-relay assisted dual-hop cooperative OFDM systems. All the relays adopt the amplify-and-forward protocol and assist the transmission from the source to destination simultaneously but on orthogonal subcarriers. The objective is to maximize the system transmission rate subject to individual power constraints on each node or a total network power constraint. We formulate such a problem as a subcarrier-pair based resource allocation that seeks the joint optimization of subcarrier pairing, subcarrier-pair-to-relay assignment, and power allocation. Using a dual approach, we solve this problem efficiently in an asymptotically optimal manner. Specifically, for the optimization problem with individual power constraints, the computational complexity is polynomial in the number of subcarriers and relay nodes, whereas the complexity of the problem with a total power constraint is polynomial in the number of subcarriers.We further propose two suboptimal algorithms for the former to trade off performance for complexity. Simulation studies are conducted to evaluate the average transmission rate and outage probability of the proposed algorithms. The impact of relay location is also discussed.",
"The design and optimization of multicarrier communications systems often involve a maximization of the total throughput subject to system resource constraints. The optimization problem is numerically difficult to solve when the problem does not have a convexity structure. This paper makes progress toward solving optimization problems of this type by showing that under a certain condition called the time-sharing condition, the duality gap of the optimization problem is always zero, regardless of the convexity of the objective function. Further, we show that the time-sharing condition is satisfied for practical multiuser spectrum optimization problems in multicarrier systems in the limit as the number of carriers goes to infinity. This result leads to efficient numerical algorithms that solve the nonconvex problem in the dual domain. We show that the recently proposed optimal spectrum balancing algorithm for digital subscriber lines can be interpreted as a dual algorithm. This new interpretation gives rise to more efficient dual update methods. It also suggests ways in which the dual objective may be evaluated approximately, further improving the numerical efficiency of the algorithm. We propose a low-complexity iterative spectrum balancing algorithm based on these ideas, and show that the new algorithm achieves near-optimal performance in many practical situations",
"",
"We consider OFDM (orthogonal frequency division multiplexing) transmission helped by a relay. Symbols sent by the source may or may not be retransmitted by a relay during a second time slot. The relay is supposed to operate in Decode-and-Forward (DF) mode. For each carrier the destination implements maximum ratio combining. Assuming perfect CSI (channel state information) knowledge the paper investigates the power allocation problem for rate maximization of the scheme. Both cases of a sum power constraint, and of individual power constraints at the source and at the relay are tackled. The theoretical analysis is illustrated by numerical results for both types of constraints.",
""
]
}
|
1010.4920
|
2129425110
|
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given a fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and that a sorted-SNR channel pairing strategy, in which each relay pairs its incoming and outgoing channels by their SNR order, is sum-rate optimal. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately. This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution, and it significantly reduces the computational complexity of finding the jointly optimal solution. It follows that the channel pairing problem in the joint optimization can again be decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided, as well as an asymptotically optimal solution for AF relaying. Numerical results demonstrate a substantial performance gain of the jointly optimal solution over several suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than from optimal power allocation, through judicious exploitation of the variation among multiple channels. The impact of channel-gain variation, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
|
Without a direct source-destination link, jointly optimizing CP and PA for DF relaying in a dual-hop OFDM system was investigated in @cite_6 and @cite_19, where @cite_6 assumed a total power constraint shared between the source and the relay, and @cite_19 considered individual power constraints imposed separately on the source and the relay. In both cases, two-step separate CP and PA schemes were proposed and then proved to achieve the jointly optimal solution. For this dual-hop setup, it was shown that the optimal CP scheme maps the channels solely based on their channel gains, independent of the optimal PA solution. (A sketch of this two-step sorted-gain pairing plus water-filling procedure follows this row.)
|
{
"cite_N": [
"@cite_19",
"@cite_6"
],
"mid": [
"2152042368",
"1967453062"
],
"abstract": [
"The combination of a multihop relay system and orthogonal frequency-division multiplexing (OFDM) modulation is a promising way to increase the capacity and coverage area. For the OFDM two-hop relay system with separate power constraints, joint subcarrier matching and power allocation is considered in this paper, which uses the ldquodecode-and-forwardrdquo relay strategy. The aforementioned problem can be formulated as a mixed binary integer programming problem, which is prohibitive when trying to find the global optimum. By separating the subcarrier matching and the power allocation, the optimal scheme, i.e., the optimal joint subcarrier matching and power allocation, is presented in this paper. After that, a suboptimal scheme with less complexity is also proposed, which can also be used to better understand the effects of power allocation. Simulation results show that the capacity of the optimal scheme is almost equivalent to the upper bound of the system capacity, and the capacity of the suboptimal scheme is close to that of the optimal scheme. In addition, simulation results also show that the one-to-one subcarrier matching is almost optimal, although it simplifies the system architecture.",
"Orthogonal frequency division multiplexing (OFDM) multihop system is a promising way to increase capacity and coverage. In this paper, we propose an optimally joint subcarrier matching and power allocation scheme to further maximize the total channel capacity with the constrained total system power. First, the problem is formulated as a mixed binary integer programming problem, which is prohibitive to find the global optimum in terms of complexity. Second, by making use of the equivalent channel power gain for any matched subcarrier pair, a low-complexity scheme is proposed. The optimal subcarrier matching is to match subcarriers by the order of the channel power gains. The optimal power allocation among the matched subcarrier pairs is water-filling. An analytical argument is given to prove that the two steps achieve the optimally joint subcarrier matching and power allocation. The simulation results show that the proposed scheme achieves the largest total channel capacity as compared to the other schemes, where there is no subcarrier matching or power allocation."
]
}
|
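The two-step scheme of @cite_6 lends itself to a compact sketch: sort each hop's subcarriers by channel gain, pair them by rank, and water-fill over the equivalent gain of each matched pair. This is a hedged illustration; in particular, the equivalent-gain formula g1*g2/(g1+g2) is the standard dual-hop DF derivation and is our assumption, not a formula quoted from the paper:

```python
import numpy as np

def waterfill(gains, total_power):
    """Classic water-filling: maximize sum(log2(1 + g_k * p_k))
    s.t. sum(p_k) <= total_power, p_k >= 0, via bisection on the
    water level mu, where p_k = max(mu - 1/g_k, 0)."""
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / gains, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

# Per-hop channel gains for 4 subcarriers (made-up values).
g1 = np.array([2.0, 0.5, 1.2, 3.5])
g2 = np.array([0.8, 2.9, 1.1, 0.3])

# Step 1: pair subcarriers by sorted channel gain on each hop.
pairs = list(zip(np.argsort(g1)[::-1], np.argsort(g2)[::-1]))

# Step 2: water-fill over the equivalent gain of each matched pair
# (g1*g2/(g1+g2) is the textbook DF equivalent gain -- our assumption).
g_eq = np.array([g1[i] * g2[j] / (g1[i] + g2[j]) for i, j in pairs])
p = waterfill(g_eq, total_power=10.0)
print(pairs, p.round(3), np.log2(1.0 + g_eq * p).sum())
```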
1010.4920
|
2129425110
|
We study the problem of channel pairing and power allocation in a multichannel multihop relay network to enhance the end-to-end data rate. Both amplify-and-forward (AF) and decode-and-forward (DF) relaying strategies are considered. Given a fixed power allocation to the channels, we show that channel pairing over multiple hops can be decomposed into independent pairing problems at each relay, and that a sorted-SNR channel pairing strategy, in which each relay pairs its incoming and outgoing channels by their SNR order, is sum-rate optimal. For the joint optimization of channel pairing and power allocation under both total and individual power constraints, we show that the problem can be decoupled into two subproblems solved separately. This separation principle is established by observing the equivalence between sorting SNRs and sorting channel gains in the jointly optimal solution, and it significantly reduces the computational complexity of finding the jointly optimal solution. It follows that the channel pairing problem in the joint optimization can again be decomposed into independent pairing problems at each relay based on sorted channel gains. The solution for optimizing power allocation for DF relaying is also provided, as well as an asymptotically optimal solution for AF relaying. Numerical results demonstrate a substantial performance gain of the jointly optimal solution over several suboptimal alternatives. It is also observed that more gain is obtained from optimal channel pairing than from optimal power allocation, through judicious exploitation of the variation among multiple channels. The impact of channel-gain variation, the number of channels, and the number of hops on the performance gain is also studied through numerical examples.
|
Similar studies on the problem of CP and PA in multi-hop relaying have been scarce. The authors of @cite_12 proposed an adaptive PA algorithm to maximize the end-to-end rate under a total power constraint in a multi-hop OFDM relaying system. For a similar network with DF relaying, @cite_9 studied the problem of joint power and time allocation under a long-term total power constraint to maximize the end-to-end rate. Furthermore, in @cite_12, the idea of using CP in addition to PA to further enhance the performance was mentioned. However, no claim was made on the optimality of the pairing scheme under the influence of PA, and the optimal joint CP and PA solution remained unknown.
|
{
"cite_N": [
"@cite_9",
"@cite_12"
],
"mid": [
"2157992276",
"2539016312"
],
"abstract": [
"We study the end-to-end resource allocation in an OFDM based multi-hop network consisting of a one-dimensional chain of nodes including a source, a destination, and multiple relays. The problem is to maximize the end-to-end average transmission rate under a long-term total power constraint by adapting the transmission power on each subcarrier over each hop and the transmission time used by each hop in every time frame. The solution to the problem is derived by decomposing it into two subproblems: short-term time and power allocation given an arbitrary total power constraint for each channel realization, and total power distribution over all channel realizations. We show that the optimal solution has the following features: the power allocation on subcarriers over each hop has the water-filling structure and a higher water level is given to the hop with relatively poor channel condition; meanwhile, the fraction of transmission time allocated to each hop is adjusted to keep the instantaneous rates over all hops equal. To tradeoff between performance, computational complexity and signalling overhead, three suboptimal resource allocation algorithms are also proposed. Numerical results are illustrated under different network settings and channel environments.",
"We consider multi-hop OFDM relaying systems employing either decode-and-forward (DF) or amplify-and-forward (AF) relaying protocols in this paper. We propose adaptive power allocation (PA) algorithms under joint transmit power constraint at source and relays in order to maximize system capacity with channel state information (CSI) known at all nodes. The paring of subcarriers is also discussed. Simulation results show that the proposed adaptive PA algorithms achieve higher system capacity than uniform PA algorithms. If paring techniques are employed, system capacity can be further enhanced."
]
}
|
1010.4018
|
1588892871
|
In this manuscript, we consider the problems of channel assignment in wireless networks and data migration in heterogeneous storage systems. We show that a soft edge coloring approach to both problems gives rigorous approximation guarantees. In the channel assignment problem arising in wireless networks, a pair of edges incident to a vertex are said to be conflicting if the channels assigned to them are the same. Our goal is to assign channels (color edges) so that the number of conflicts is minimized. The problem is NP-hard by a reduction from Edge coloring, and we present two combinatorial algorithms for this case. The first algorithm is based on a distributed greedy method and gives a solution with at most @math more conflicts than the optimal solution. The approximation ratio of the second algorithm is @math , which gives a ( @math )-factor for dense graphs and is the best possible unless P = NP. We also consider the data migration problem in heterogeneous storage systems. In such systems, data layouts may need to be reconfigured over time for load balancing or in the event of system failures or upgrades. It is critical to migrate data to their target locations as quickly as possible to obtain the best performance of the system. Most of the previous results on data migration assume that each storage node can perform only one data transfer at a time. However, storage devices tend to have heterogeneous capabilities, as devices may be added over time as storage demand increases. We develop algorithms to minimize the data migration time. We show that it is possible to find an optimal migration schedule when all @math 's are even. Furthermore, though the problem is NP-hard in general, we give an efficient soft edge coloring algorithm that offers a rigorous @math -approximation guarantee.
|
Fitzpatrick and Meertens @cite_13 considered a variant of the graph coloring problem (the soft graph coloring problem) in which the objective is to develop a distributed algorithm for coloring vertices so that the number of conflicts is minimized. Their algorithm repeatedly recolors vertices to quickly reduce the conflicts to an acceptable level. They studied its experimental performance on regular graphs, but no theoretical analysis was provided. Damaschke @cite_6 presented a distributed soft coloring algorithm for special cases such as paths and grids, and analyzed the number of conflicts as a function of time @math . In particular, the conflict density on the path is given as @math when two colors are used, where the conflict density is the number of conflicts divided by @math . (A toy local-recoloring sketch follows this row.)
|
{
"cite_N": [
"@cite_13",
"@cite_6"
],
"mid": [
"1485830228",
"2129185967"
],
"abstract": [
"This paper reports on a simple, decentralized, anytime, stochastic, soft graph-colouring algorithm. The algorithm is designed to quickly reduce the number of colour conflicts in large, sparse graphs in a scalable, robust, low-cost manner. The algorithm is experimentally evaluated in a framework motivated by its application to resource coordination in large, distributed networks.",
"In a soft coloring of a graph, a few adjacent vertices may receive the same color. We study soft coloring in the distributed model where vertices are processing units and edges are communication links. We aim at reducing coloring conflicts as quickly as possible over time by recoloring. We propose a randomized algorithm for 2-coloring the path with optimal decrease rate. Conflicts can be reduced exponentially faster if extra colors are allowed. We generalize the results to a broader class of locally checkable labeling problems on enhanced paths. A single result for grid coloring is also presented."
]
}
|
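The flavor of these soft-coloring algorithms can be conveyed with a toy local-recoloring loop: each vertex repeatedly adopts the color that currently conflicts with the fewest of its neighbors, so the total number of conflicts never increases. This sketches the general idea only; the cited algorithms differ in scheduling and randomization:

```python
import random

def soft_color(adj, num_colors, rounds=50, seed=0):
    """Greedy local recoloring in the spirit of soft graph coloring:
    each vertex repeatedly switches to the color that currently
    conflicts with the fewest of its neighbors."""
    rng = random.Random(seed)
    color = {v: rng.randrange(num_colors) for v in adj}
    for _ in range(rounds):
        for v in rng.sample(list(adj), len(adj)):
            counts = [0] * num_colors
            for u in adj[v]:
                counts[color[u]] += 1
            color[v] = min(range(num_colors), key=counts.__getitem__)
    return color

def conflicts(adj, color):
    # each undirected edge is counted once via the u < v test
    return sum(color[u] == color[v] for v in adj for u in adj[v] if u < v)

# Toy graph: a 6-cycle with 2 colors. Total conflicts never increase
# under these moves, so the count printed is small (often zero).
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(conflicts(cycle, soft_color(cycle, num_colors=2)))
```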
1010.3132
|
2949171286
|
We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite-duration pulses with unknown shapes and time positions. Efficient sampling schemes have previously been developed for the cases where either the pulse shape or the locations of the pulses are known. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses, without knowledge of the pulse locations, have not been derived. The goal of this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.
|
Recently, the ideas of compressed sensing have been extended to allow for sub-Nyquist sampling of analog signals @cite_36 , @cite_12 , @cite_23 , @cite_17 , @cite_11 , @cite_35 , @cite_10 , @cite_29 . These works follow the Xampling paradigm, which provides a framework for incorporating and exploiting structure in analog signals without the need for discretization @cite_2 , @cite_8 . Two of these sub-Nyquist solutions are closely related to our scheme: the first is a sub-Nyquist sampling architecture for multiband signals introduced in @cite_17 , and the second is a sampling system for multipulse signals with known pulse shape introduced in @cite_11 . We now briefly comment on the connection of our results to these works. The observations made here will be expanded in the follow-up paper, in which we generalize our sampling scheme by a certain mixing of the channels. We will show that, by proper mixing of the channels, we can efficiently sample both multipulse signals with known pulse shape and time-limited signals that are essentially multiband, connecting our results more explicitly to prior sampling architectures.
|
{
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_36",
"@cite_29",
"@cite_17",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2138787877",
"1993784401",
"",
"",
"2123629701",
"2147276092",
"1980300788",
"2102701524",
"",
"2074380172"
],
"abstract": [
"Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.",
"The authors present a sub-Nyquist analog-to-digital converter of wideband inputs. The circuit realises the recently proposed modulated wideband converter, which is a flexible platform for sampling signals according to their actual bandwidth occupation. The theoretical work enables, for example, a sub-Nyquist wideband communication receiver, which has no prior information on the transmitter carrier positions. The present design supports input signals with 2 GHz Nyquist rate and 120 MHz spectrum occupancy, with arbitrary transmission frequencies. The sampling rate is as low as 280 MHz. To the best of the authors' knowledge, this is the first reported hardware that performs sub-Nyquist sampling and reconstruction of wideband signals. The authors describe the various circuit design considerations, with an emphasis on the non-ordinary challenges the converter introduces: mixing a signal with a multiple set of sinusoids, rather than a single local oscillator, and generation of highly transient periodic waveforms, with transient intervals on the order of the Nyquist rate. Hardware experiments validate the design and demonstrate sub-Nyquist sampling and signal reconstruction.",
"",
"",
"Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.",
"Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed lscr2 lscr1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.",
"We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The main functions of this framework are two: Analog compression that narrows down the input bandwidth prior to sampling with commercial devices followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally sparse signals serves as a test-case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexities. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address lowrate signal processing and develop an algorithm for that purpose that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.",
"Time-delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Previous methods for time delay recovery either operate on the analog received signal, or require sampling at the Nyquist rate of the transmitted pulse. In this paper, we develop a unified approach to time delay estimation from low-rate samples. This problem can be formulated in the broader context of sampling over an infinite union of subspaces. Although sampling over unions of subspaces has been receiving growing interest, previous results either focus on unions of finite-dimensional subspaces, or finite unions. The framework we develop here leads to perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses, and allows for a variety of different sampling methods. The sampling rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. This result can be viewed as a sampling theorem over an infinite union of infinite dimensional subspaces. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation, we develop sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.",
"",
"We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches."
]
}
|
1010.3132
|
2949171286
|
We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite-duration pulses with unknown shapes and time positions. Efficient sampling schemes have previously been developed for the cases where either the pulse shape or the locations of the pulses are known. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses, without knowledge of the pulse locations, have not been derived. The goal of this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.
|
The concept of using modulation waveforms is based on ideas presented in @cite_17 for a multiband model, which is the Fourier dual of ours: the signals in @cite_17 are assumed to be sparse in frequency, while multipulse signals are sparse in time. More specifically, @cite_17 considers multiband signals whose Fourier transform is concentrated on @math frequency bands, where the width of each band is no greater than @math . The locations of the bands are unknown in advance. A low-rate sampling scheme, called the modulated wideband converter (MWC), allowing recovery of such signals at the rate of @math was proposed in @cite_17 ; a hardware prototype appears in @cite_8 . This scheme consists of parallel channels, where in each channel the input is modulated with a periodic waveform, followed by a low-pass filter and low-rate uniform sampling. The main idea is that in each channel the spectrum of the signal is scrambled, so that a portion of the energy of all bands appears at baseband. Therefore, the input to the sampler contains a mixture of all the bands. The mixing of the frequency bands in @cite_17 is analogous to the mixing of the Gabor coefficients in our scheme. (A toy simulation of one MWC channel follows this row.)
|
{
"cite_N": [
"@cite_8",
"@cite_17"
],
"mid": [
"1993784401",
"2123629701"
],
"abstract": [
"The authors present a sub-Nyquist analog-to-digital converter of wideband inputs. The circuit realises the recently proposed modulated wideband converter, which is a flexible platform for sampling signals according to their actual bandwidth occupation. The theoretical work enables, for example, a sub-Nyquist wideband communication receiver, which has no prior information on the transmitter carrier positions. The present design supports input signals with 2 GHz Nyquist rate and 120 MHz spectrum occupancy, with arbitrary transmission frequencies. The sampling rate is as low as 280 MHz. To the best of the authors' knowledge, this is the first reported hardware that performs sub-Nyquist sampling and reconstruction of wideband signals. The authors describe the various circuit design considerations, with an emphasis on the non-ordinary challenges the converter introduces: mixing a signal with a multiple set of sinusoids, rather than a single local oscillator, and generation of highly transient periodic waveforms, with transient intervals on the order of the Nyquist rate. Hardware experiments validate the design and demonstrate sub-Nyquist sampling and signal reconstruction.",
"Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters."
]
}
|
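A crude, single-channel simulation of the MWC front end described above: multiply the input by a periodic ±1 chip sequence, low-pass filter, and sample at the low rate. The ideal FFT-mask filter and all parameters are illustrative simplifications, not the hardware design of @cite_8:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nyquist-rate grid and a toy "multiband" input: a narrow band of
# width ~40 Hz around a carrier unknown to the sampler (1234 Hz here).
N, fs = 4096, 4096.0
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 1234.0 * t) * np.sinc(40.0 * (t - 0.5))

# One MWC-style channel: mix with a periodic +/-1 chip sequence ...
M = 64                                    # chips per period
p = np.tile(rng.choice([-1.0, 1.0], size=M), N // M)
mixed = x * p

# ... then low-pass filter (ideal, via an FFT mask) and decimate.
cutoff = fs / (2 * M)                     # keep only baseband
X = np.fft.rfft(mixed)
X[np.fft.rfftfreq(N, d=1 / fs) > cutoff] = 0.0
samples = np.fft.irfft(X, n=N)[::M]       # uniform samples at fs / M

print(samples.shape)                      # (64,) -> rate 64 Hz vs 4096 Hz
```

The mixing smears every spectral slice of x across the band, so the retained baseband samples carry a weighted mixture of all bands; recovering the bands from several such channels is the compressed-sensing step.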
1010.3132
|
2949171286
|
We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite-duration pulses with unknown shapes and time positions. Efficient sampling schemes have previously been developed for the cases where either the pulse shape or the locations of the pulses are known. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses, without knowledge of the pulse locations, have not been derived. The goal of this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.
|
Despite the similarity, there are some important differences between the systems. First, in the mixing stage we use time-limited, non-periodic waveforms, while the MWC relies on periodic functions. Second, following the mixing stage, we use an integrator, in contrast to the low-pass filter in @cite_17 . These differences stem from the fact that we are interested in different quantities: the content of the signal on time intervals in our work, as opposed to frequency bands in @cite_17 . However, in both systems the mixing serves the same purpose: to reduce the sampling rate relative to the Nyquist rate. (A one-channel mix-and-integrate sketch follows this row.)
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2123629701"
],
"abstract": [
"Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters."
]
}
|
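The contrast drawn above can be seen in a few lines of code: a mix-and-integrate channel produces the inner product of the input with a time-limited waveform, i.e., a measurement of signal content on a time interval rather than a frequency band. A sketch with a made-up waveform and signal, not the paper's exact architecture:

```python
import numpy as np

# One channel of a "mix then integrate" front end: the output sample
# is the inner product of the input with a time-limited waveform.
N, T = 2048, 1.0
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

x = np.exp(-((t - 0.3) ** 2) / 1e-3)           # a pulse near t = 0.3

def channel(x, w, dt):
    return np.sum(x * w) * dt                   # modulator + integrator

window = ((t > 0.25) & (t < 0.35)).astype(float)
w = window * np.cos(2 * np.pi * 40.0 * t)       # time-limited modulator
print(channel(x, w, dt))                        # ~ a Gabor coefficient of x
```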
1010.3132
|
2949171286
|
We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite-duration pulses with unknown shapes and time positions. Efficient sampling schemes have previously been developed for the cases where either the pulse shape or the locations of the pulses are known. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses, without knowledge of the pulse locations, have not been derived. The goal of this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.
|
Another related signal model is that of multipulse signals with known pulse shapes @cite_35 , @cite_11 , @cite_29 , where @math is known. This problem reduces to finding the amplitudes @math and time delays @math . Under certain assumptions on the pulse @math , it is possible to recover the amplitudes and shifts from a finite number of Fourier coefficients of @math , and therefore to reconstruct @math perfectly. The recovery is a two-step method. First, the time delays are estimated using nonlinear techniques, e.g., the annihilating filter method @cite_29 , as long as the number of measurements @math satisfies @math and the time delays are distinct. Once the time delays are known, the amplitudes can be found via a least-squares approach. The number of channels is motivated by the number of unknown parameters @math , which equals @math . (A toy implementation of this two-step recovery follows this row.)
|
{
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_11"
],
"mid": [
"2138787877",
"",
"2074380172"
],
"abstract": [
"Signals comprised of a stream of short pulses appear in many applications including bioimaging and radar. The recent finite rate of innovation framework, has paved the way to low rate sampling of such pulses by noticing that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing sampling schemes are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams from the minimal number of samples. We then design a compactly supported class of filters, satisfying this condition. The periodic solution is extended to finite and infinite streams and is shown to be numerically stable even for a large number of pulses. High noise robustness is also demonstrated when the delays are sufficiently separated. Finally, we process ultrasound imaging data using our techniques and show that substantial rate reduction with respect to traditional ultrasound sampling schemes can be achieved.",
"",
"We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches."
]
}
|
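A toy NumPy implementation of this two-step recovery for the simplest known-pulse case, a periodic stream of Diracs, whose Fourier coefficients are a sum of complex exponentials: step 1 finds the delays with an annihilating filter, step 2 solves a Vandermonde least-squares problem for the amplitudes. The sizes and the test signal are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, L = 1.0, 3                              # period and number of Diracs
t_true = np.sort(rng.uniform(0.0, tau, L))   # unknown delays
a_true = rng.uniform(0.5, 2.0, L)            # unknown amplitudes

# Fourier coefficients X[k] = sum_l a_l * exp(-2j*pi*k*t_l/tau),
# k = 0..K-1, with K >= 2L measurements (two unknowns per Dirac).
K = 2 * L + 2
k = np.arange(K)
V = np.exp(-2j * np.pi * np.outer(k, t_true) / tau)
X = V @ a_true

# Step 1: annihilating filter. Build the Toeplitz system T h = 0 and
# take h from the null space (smallest right singular vector); the
# roots of h encode the delays.
rows = K - L
T = np.array([X[L + r - np.arange(L + 1)] for r in range(rows)])
h = np.linalg.svd(T)[2][-1].conj()
u = np.roots(h)
t_est = np.sort((-np.angle(u) * tau) / (2 * np.pi) % tau)

# Step 2: amplitudes by least squares on the Vandermonde system.
V_est = np.exp(-2j * np.pi * np.outer(k, t_est) / tau)
a_est = np.linalg.lstsq(V_est, X, rcond=None)[0].real

print(np.round(t_est, 4), np.round(t_true, 4))   # delays match
print(np.round(a_est, 4), np.round(a_true, 4))   # amplitudes match
```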
1010.3132
|
2949171286
|
We develop sub-Nyquist sampling systems for analog signals comprised of several, possibly overlapping, finite-duration pulses with unknown shapes and time positions. Efficient sampling schemes have previously been developed for the cases where either the pulse shape or the locations of the pulses are known. To the best of our knowledge, stable and low-rate sampling strategies for continuous signals that are superpositions of unknown pulses, without knowledge of the pulse locations, have not been derived. The goal of this paper is to fill this gap. We propose a multichannel scheme based on Gabor frames that exploits the sparsity of signals in time and enables sampling multipulse signals at sub-Nyquist rates. Moreover, if the signal is additionally essentially multiband, then the sampling scheme can be adapted to lower the sampling rate without knowing the band locations. We show that, with proper preprocessing, the necessary Gabor coefficients can be recovered from the samples using standard methods of compressed sensing. In addition, we provide error estimates on the reconstruction and analyze the proposed architecture in the presence of noise.
|
The Fourier coefficients can be determined from samples of @math using a scheme similar to that of Fig. with @math channels and modulators @math with @math . In this case, the input-output relation becomes @math , where @math is a vector of length @math and @math is a vector of Fourier coefficients of @math of length @math . In @cite_11 the authors proposed a more general scheme based on mixing the modulations @math with proper coefficients, resulting in periodic waveforms, before applying them to the signal. The corresponding samples are weighted superpositions of the Fourier coefficients @math . When the weights are properly chosen, @math can be recovered, and therefore the time-delays and amplitudes as well. We incorporate the idea of mixing the channels into our sampling system in the follow-up paper, and show that under certain conditions on the Gabor frame, our generalized system can be used to sample signals of the form ). We note here that the system of @cite_11 is inefficient for our signal model, since it reduces to the Fourier series method, which does not take sparsity in time into account.
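To make the recovery step concrete: once the weights are fixed, unmixing is a linear solve. A minimal numpy sketch, under the simplifying assumption of K coefficients observed through K channels with generic (hence invertible) real mixing weights; all names and sizes here are hypothetical, not those of @cite_11 :

    import numpy as np

    rng = np.random.default_rng(0)
    K = 8                                  # number of Fourier coefficients
    x = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # unknown coefficients

    A = rng.standard_normal((K, K))        # mixing weights (generically invertible)
    c = A @ x                              # channel outputs: weighted superpositions

    x_hat = np.linalg.solve(A, c)          # unmix to recover the coefficients
    assert np.allclose(x_hat, x)
    # The delays and amplitudes would then follow from a spectral-estimation
    # step (e.g., an annihilating filter) applied to x_hat (not shown here).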
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2074380172"
],
"abstract": [
"We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches."
]
}
|
1010.2955
|
1997201895
|
In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.
|
In statistical learning, mixed data is typically modeled as a set of independent samples drawn from a mixture of probabilistic distributions. As a single subspace can be well modeled by a (degenerate) Gaussian distribution, it is straightforward to assume that each probabilistic distribution is Gaussian, i.e., to adopt a mixture-of-Gaussians model. The problem of segmenting the data is then converted into a model estimation problem. The estimation can be performed either by using the Expectation Maximization (EM) algorithm to find a maximum likelihood estimate, as done in @cite_20 , or by iteratively finding a min-max estimate, as adopted by K-subspaces @cite_4 and Random Sample Consensus (RANSAC) @cite_38 . These methods are sensitive to errors, so several efforts have been made to improve their robustness, e.g., the Median K-flats @cite_11 for K-subspaces, the work of @cite_8 for RANSAC, and the coding-length criterion of @cite_27 for characterizing a mixture of Gaussians. These refinements add some robustness. Nevertheless, the problem remains not well solved, because the underlying optimization difficulty is a bottleneck for all of these methods.
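To make the min-max/alternating flavour of these estimators concrete, here is a toy K-subspaces iteration: assign each sample to the subspace with the smallest projection residual, then refit each subspace by a truncated SVD. A sketch only, assuming K subspaces of a known common dimension d and no outliers:

    import numpy as np

    def k_subspaces(X, K, d, n_iter=50, seed=0):
        """X: D x N data matrix; returns a label in {0..K-1} per column."""
        rng = np.random.default_rng(seed)
        N = X.shape[1]
        labels = rng.integers(K, size=N)                   # random initialization
        for _ in range(n_iter):
            bases = []
            for k in range(K):
                Xk = X[:, labels == k]
                if Xk.shape[1] < d:                        # degenerate cluster: re-seed
                    Xk = X[:, rng.integers(N, size=d)]
                U, _, _ = np.linalg.svd(Xk, full_matrices=False)
                bases.append(U[:, :d])                     # best-fit d-dim basis
            # distance of each sample to each subspace: ||x - U U^T x||
            res = np.stack([np.linalg.norm(X - U @ (U.T @ X), axis=0) for U in bases])
            labels = res.argmin(axis=0)                    # reassign
        return labels

Like EM, this alternation can only be expected to reach a local minimum, which is precisely the sensitivity discussed above.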
|
{
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_8",
"@cite_27",
"@cite_20",
"@cite_11"
],
"mid": [
"2085261163",
"2125742596",
"2121148353",
"2164931791",
"2100075055",
"2009234819"
],
"abstract": [
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing",
"We introduce two appearance-based methods for clustering a set of images of 3D (three-dimensional) objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity, which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.",
"We study the problem of estimating a mixed geometric model of multiple subspaces in the presence of a significant amount of outliers. The estimation of multiple subspaces is an important problem in computer vision, particularly for segmenting multiple motions in an image sequence. We first provide a comprehensive survey of robust statistical techniques in the literature, and identify three main approaches for detecting and rejecting outliers. Through a careful examination of these approaches, we propose and investigate three principled methods for robustly estimating mixed subspace models: random sample consensus, the influence function, and multivariate trimming. Using a benchmark synthetic experiment and a set of real image sequences, we conduct a thorough comparison of the three methods",
"In this paper, based on ideas from lossy data coding and compression, we present a simple but effective technique for segmenting multivariate mixed data that are drawn from a mixture of Gaussian distributions, which are allowed to be almost degenerate. The goal is to find the optimal segmentation that minimizes the overall coding length of the segmented data, subject to a given distortion. By analyzing the coding length rate of mixed data, we formally establish some strong connections of data segmentation to many fundamental concepts in lossy data compression and rate-distortion theory. We show that a deterministic segmentation is approximately the (asymptotically) optimal solution for compressing mixed data. We propose a very simple and effective algorithm that depends on a single parameter, the allowable distortion. At any given distortion, the algorithm automatically determines the corresponding number and dimension of the groups and does not involve any parameter estimation. Simulation results reveal intriguing phase-transition-like behaviors of the number of segments when changing the level of distortion or the amount of outliers. Finally, we demonstrate how this technique can be readily applied to segment real imagery and bioinformatic data.",
"Multibody factorization algorithms give an elegant and simple solution to the problem of structure from motion even for scenes containing multiple independent motions. Despite this elegance, it is still quite difficult to apply these algorithms to arbitrary scenes. First, their performance deteriorates rapidly with increasing noise. Second, they cannot be applied unless all the points can be tracked in all the frames (as will rarely happen in real scenes). Third, they cannot incorporate prior knowledge on the structure or the motion of the objects. In this paper we present a multibody factorization algorithm that can handle arbitrary noise covariance for each feature as well as missing data. We show how to formulate the problem as one of factor analysis and derive an expectation-maximization based maximum-likelihood algorithm. One of the advantages of our formulation is that we can easily incorporate prior knowledge, including the assumption of temporal coherence. We show that this assumption greatly enhances the robustness of our algorithm and present results on challenging sequences.",
"We describe the MedianK-flats (MKF) algorithm, a simple online method for hybrid linear modeling, i.e., for approximating data by a mixture of flats. This algorithm simultaneously partitions the data into clusters while finding their corresponding best approximating l 1 d-flats, so that the cumulative l 1 error is minimized. The current implementation restricts d-flats to be d-dimensional linear subspaces. It requires a negligible amount of storage, and its complexity, when modeling data consisting of N points in ℝD with K d-dimensional linear subspaces, is of order O(n s · K · d · D + n s · d2 · D), where n s is the number of iterations required for convergence (empirically on the order of 104). Since it is an online algorithm, data can be supplied to it incrementally and it can incrementally produce the corresponding output. The performance of the algorithm is carefully evaluated using synthetic and real data."
]
}
|
1010.2955
|
1997201895
|
In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.
|
Factorization based methods @cite_6 seek to approximate the given data matrix as a product of two matrices, such that the support pattern of one of the factors reveals the segmentation of the samples. In order to achieve robustness to noise, these methods modify the formulation by adding extra regularization terms. Nevertheless, such modifications usually lead to non-convex optimization problems, which require heuristic algorithms (often based on alternating minimization or EM-style procedures) to solve. Getting stuck at local minima may undermine their performance, especially when the data is grossly corrupted. It will be shown that LRR can be regarded as a robust generalization of the method in @cite_6 (which is referred to as PCA in this paper). The formulation of LRR is convex and can be solved in polynomial time.
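For reference, in the noiseless case the segmentation information of @cite_6 is carried by the shape interaction matrix built from the SVD of the data (and the minimizer of the clean-data LRR program coincides with this matrix). A minimal sketch, assuming the total rank r is known:

    import numpy as np

    def shape_interaction_matrix(X, r):
        """X: D x N data matrix, r: sum of the subspace dimensions."""
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        V = Vt[:r].T                 # top-r right singular vectors, N x r
        return V @ V.T               # N x N; (i, j) entry ~ 0 for samples in
                                     # different independent subspaces

Spectral clustering on the entry magnitudes of this matrix then yields the segmentation.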
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2118154608"
],
"abstract": [
"The structure-from-motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, especially for its theoretical treatment. In this paper we present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features without grouping them into individual objects. Once the matrix is computed, it allows for segmenting features into objects by the process of transforming it into a canonical form, as well as recovering the shape and motion of each object. The theory works under a broad set of projection models (scaled orthography, paraperspective and affine) but they must be linear, so it excludes projective “cameras”."
]
}
|
1010.2955
|
1997201895
|
In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.
|
Generalized Principal Component Analysis (GPCA) @cite_36 presents an algebraic way to model data drawn from a union of multiple subspaces. This method describes the subspace containing a data point by the gradient, at that point, of a polynomial that vanishes on the data. Subspace segmentation is thereby made equivalent to fitting the data with polynomials. GPCA can guarantee the success of the segmentation under certain conditions, and it does not impose any restriction on the subspaces. However, the method is sensitive to noise, because the polynomials are hard to estimate from real data; the polynomial fitting is also what makes GPCA computationally expensive. Recently, Robust Algebraic Segmentation (RAS) @cite_7 has been proposed to resolve the robustness issue of GPCA. However, the computational cost of fitting the polynomials remains prohibitive, so RAS is practical only when the data dimension is low and the number of subspaces is small.
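The polynomial-fitting idea can be seen in a two-lines-in-the-plane toy example: embed the samples with degree-2 monomials, take the vanishing polynomial from the nullspace of the embedded data matrix, and read each line's normal off the polynomial's gradient. A sketch for noiseless data; the fragility of the nullspace estimate is exactly where the noise issue discussed above enters:

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.standard_normal(100)
    L1 = np.stack([t[:50], 2.0 * t[:50]])        # 50 points on the line y = 2x
    L2 = np.stack([t[50:], -0.5 * t[50:]])       # 50 points on the line y = -x/2
    X = np.concatenate([L1, L2], axis=1)

    # Degree-2 Veronese embedding [x^2, xy, y^2]; the polynomial vanishing on
    # both lines spans the (right) nullspace of the embedded data matrix.
    V = np.stack([X[0]**2, X[0] * X[1], X[1]**2], axis=1)
    c = np.linalg.svd(V)[2][-1]                  # coefficients of p(x, y)

    def normal_at(x, y):
        """Gradient of p at (x, y): the normal of the line through that point."""
        g = np.array([2*c[0]*x + c[1]*y, c[1]*x + 2*c[2]*y])
        return g / np.linalg.norm(g)

    print(normal_at(1.0, 2.0))     # proportional to (2, -1), normal of y = 2x
    print(normal_at(1.0, -0.5))    # proportional to (1, 2),  normal of y = -x/2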
|
{
"cite_N": [
"@cite_36",
"@cite_7"
],
"mid": [
"2003361735",
"1981985525"
],
"abstract": [
"Recently many scientific and engineering applications have involved the challenging task of analyzing large amounts of unsorted high-dimensional data that have very complicated structures. From both geometric and statistical points of view, such unsorted data are considered mixed as different parts of the data have significantly different structures which cannot be described by a single model. In this paper we propose to use subspace arrangements—a union of multiple subspaces—for modeling mixed data: each subspace in the arrangement is used to model just a homogeneous subset of the data. Thus, multiple subspaces together can capture the heterogeneous structures within the data set. In this paper, we give a comprehensive introduction to a new approach for the estimation of subspace arrangements. This is known as generalized principal component analysis (GPCA). In particular, we provide a comprehensive summary of important algebraic properties and statistical facts that are crucial for making the inference of subspace arrangements both efficient and robust, even when the given data are corrupted by noise or contaminated with outliers. This new method in many ways improves and generalizes extant methods for modeling or clustering mixed data. There have been successful applications of this new method to many real-world problems in computer vision, image processing, and system identification. In this paper, we will examine several of those representative applications. This paper is intended to be expository in nature. However, in order that this may serve as a more complete reference for both theoreticians and practitioners, we take the liberty of filling in several gaps between the theory and the practice in the existing literature.",
"This paper studies segmentation of multiple rigid-body motions in a 3-D dynamic scene under perspective camera projection. We consider dynamic scenes that contain both 3-D rigid-body structures and 2-D planar structures. Based on the well-known epipolar and homography constraints between two views, we propose a hybrid perspective constraint (HPC) to unify the representation of rigid-body and planar motions. Given a mixture of K hybrid perspective constraints, we propose an algebraic process to partition image correspondences to the individual 3-D motions, called Robust Algebraic Segmentation (RAS). Particularly, we prove that the joint distribution of image correspondences is uniquely determined by a set of (2K)-th degree polynomials, a global signature for the union of K motions of possibly mixed type. The first and second derivatives of these polynomials provide a means to recover the association of the individual image samples to their respective motions. Finally, using robust statistics, we show that the polynomials can be robustly estimated in the presence of moderate image noise and outliers. We conduct extensive simulations and real experiments to validate the performance of the new algorithm. The results demonstrate that RAS achieves notably higher accuracy than most existing robust motion-segmentation methods, including random sample consensus (RANSAC) and its variations. The implementation of the algorithm is also two to three times faster than the existing methods. The implementation of the algorithm and the benchmark scripts are available at http: perception.csl.illinois.edu ras ."
]
}
|
1010.3007
|
2568356985
|
The existence of quantum uncertainty relations is the essential reason that some classically unrealizable cryptographic primitives become realizable when quantum communication is allowed. One operational manifestation of these uncertainty relations is a purely quantum effect referred to as information locking [ 2004]. A locking scheme can be viewed as a cryptographic protocol in which a uniformly random n-bit message is encoded in a quantum system using a classical key of size much smaller than n. Without the key, no measurement of this quantum state can extract more than a negligible amount of information about the message, in which case the message is said to be “locked”. Furthermore, knowing the key, it is possible to recover, that is “unlock”, the message. In this article, we make the following contributions by exploiting a connection between uncertainty relations and low-distortion embeddings of Euclidean spaces into slightly larger spaces endowed with the ℓ1 norm. We introduce the notion of a metric uncertainty relation and connect it to low-distortion embeddings of ℓ2 into ℓ1. A metric uncertainty relation also implies an entropic uncertainty relation. We prove that random bases satisfy uncertainty relations with a stronger definition and better parameters than previously known. Our proof is also considerably simpler than earlier proofs. We then apply this result to show the existence of locking schemes with key size independent of the message length. Moreover, we give efficient constructions of bases satisfying metric uncertainty relations. The bases defining these metric uncertainty relations are computable by quantum circuits of almost linear size. This leads to the first explicit construction of a strong information locking scheme. These constructions are obtained by adapting an explicit norm embedding due to Indyk [2007] and an extractor construction of [2009]. We apply our metric uncertainty relations to exhibit communication protocols that perform equality testing of n-qubit states. We prove that this task can be performed by a single message protocol using O(log^2 n) qubits and n bits of communication, where the computation of the sender is efficient.
|
Aubrun, Szarek and Werner @cite_17 @cite_29 also used a connection between low-distortion embeddings and quantum information. They show in @cite_17 that the existence of large subspaces of highly entangled states follows from Dvoretzky's theorem for the Schatten @math -norm for @math (the Schatten @math -norm of a matrix @math is defined as the @math norm of the vector of singular values of @math ). This in turn shows the existence of channels that violate additivity of the minimum output @math -Rényi entropy, as was previously demonstrated by Hayden and Winter @cite_55 . Using a more delicate argument @cite_64 , they are also able to recover Hastings' @cite_81 counterexample to the additivity conjecture.
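Both quantities named here are directly computable; a small numpy sketch of the Schatten @math -norm (the @math norm of the singular values, as defined above) and of the order-@math Rényi entropy of a density matrix. This illustrates the definitions only, not the counterexample constructions:

    import numpy as np

    def schatten_norm(A, p):
        s = np.linalg.svd(A, compute_uv=False)   # singular values of A
        return np.sum(s ** p) ** (1.0 / p)       # l_p norm of the singular values

    def renyi_entropy(rho, p):
        """S_p(rho) = log(tr rho^p) / (1 - p), for p != 1."""
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]                   # drop numerically zero eigenvalues
        return np.log(np.sum(lam ** p)) / (1.0 - p)

    # e.g., a maximally mixed qubit has Rényi entropy log 2 for every p:
    print(renyi_entropy(np.eye(2) / 2, p=2.0), np.log(2))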
|
{
"cite_N": [
"@cite_64",
"@cite_55",
"@cite_29",
"@cite_81",
"@cite_17"
],
"mid": [
"1998018243",
"2071160621",
"",
"1568529095",
"2112866726"
],
"abstract": [
"The goal of this note is to show that Hastings’s counterexample to the additivity of minimal output von Neumann entropy can be readily deduced from a sharp version of Dvoretzky’s theorem.",
"For all p > 1, we demonstrate the existence of quantum channels with non-multiplicative maximal output p-norms. Equivalently, for all p > 1, the minimum output Renyi entropy of order p of a quantum channel is not additive. The violations found are large; in all cases, the minimum output Renyi entropy of order p for a product channel need not be significantly greater than the minimum output entropy of its individual factors. Since p = 1 corresponds to the von Neumann entropy, these counterexamples demonstrate that if the additivity conjecture of quantum information theory is true, it cannot be proved as a consequence of any channel-independent guarantee of maximal p-norm multiplicativity. We also show that a class of channels previously studied in the context of approximate encryption lead to counterexamples for all p > 2.",
"",
"The additivity conjecture of quantum information theory implies that entanglement cannot, even in principle, help to funnel more classical information through a quantum-communication channel. A counterexample shows that this conjecture is false.",
"The goal of this note is to show that the analysis of the minimum output p-Renyi entropy of a typical quantum channel essentially amounts to applying Milman’s version of Dvoretzky’s theorem about almost Euclidean sections of high-dimensional convex bodies. This conceptually simplifies the (nonconstructive) argument by Hayden–Winter, disproving the additivity conjecture for the minimal output p-Renyi entropy (for p>1)."
]
}
|
1010.3007
|
2568356985
|
The existence of quantum uncertainty relations is the essential reason that some classically unrealizable cryptographic primitives become realizable when quantum communication is allowed. One operational manifestation of these uncertainty relations is a purely quantum effect referred to as information locking [ 2004]. A locking scheme can be viewed as a cryptographic protocol in which a uniformly random n-bit message is encoded in a quantum system using a classical key of size much smaller than n. Without the key, no measurement of this quantum state can extract more than a negligible amount of information about the message, in which case the message is said to be “locked”. Furthermore, knowing the key, it is possible to recover, that is “unlock”, the message. In this article, we make the following contributions by exploiting a connection between uncertainty relations and low-distortion embeddings of Euclidean spaces into slightly larger spaces endowed with the ℓ1 norm. We introduce the notion of a metric uncertainty relation and connect it to low-distortion embeddings of ℓ2 into ℓ1. A metric uncertainty relation also implies an entropic uncertainty relation. We prove that random bases satisfy uncertainty relations with a stronger definition and better parameters than previously known. Our proof is also considerably simpler than earlier proofs. We then apply this result to show the existence of locking schemes with key size independent of the message length. Moreover, we give efficient constructions of bases satisfying metric uncertainty relations. The bases defining these metric uncertainty relations are computable by quantum circuits of almost linear size. This leads to the first explicit construction of a strong information locking scheme. These constructions are obtained by adapting an explicit norm embedding due to Indyk [2007] and an extractor construction of [2009]. We apply our metric uncertainty relations to exhibit communication protocols that perform equality testing of n-qubit states. We prove that this task can be performed by a single message protocol using O(log^2 n) qubits and n bits of communication, where the computation of the sender is efficient.
|
In a cryptographic setting, Damgård, Pedersen and Salvail @cite_49 used ideas related to locking to develop quantum ciphers that have the property that the key used for encryption can be recycled. In @cite_0 , they construct a quantum key recycling scheme (see also @cite_82 ) with near optimal parameters by encoding the message together with its authentication tag using a full set of mutually unbiased bases.
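As a toy illustration of the ingredient these schemes rely on, the single-qubit case of a full set of mutually unbiased bases (the eigenbases of Z, X and Y) can be checked in a few lines; the scheme of @cite_0 of course works with MUBs in much larger dimension:

    import numpy as np

    s2 = np.sqrt(2)
    Zb = [np.array([1, 0], complex), np.array([0, 1], complex)]
    Xb = [np.array([1, 1], complex) / s2, np.array([1, -1], complex) / s2]
    Yb = [np.array([1, 1j]) / s2, np.array([1, -1j]) / s2]

    bases = [Zb, Xb, Yb]
    for i in range(3):
        for j in range(i + 1, 3):
            for a in bases[i]:
                for b in bases[j]:
                    # mutual unbiasedness: |<a|b>|^2 = 1/d with d = 2
                    assert np.isclose(abs(np.vdot(a, b)) ** 2, 0.5)
    print("all three bases pairwise unbiased")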
|
{
"cite_N": [
"@cite_0",
"@cite_49",
"@cite_82"
],
"mid": [
"1498523337",
"2949754091",
"2001213350"
],
"abstract": [
"Assuming an insecure quantum channel and an authenticated classical channel, we propose an unconditionally secure scheme for encrypting classical messages under a shared key, where attempts to eavesdrop the ciphertext can be detected. If no eavesdropping is detected, we can securely re-use the entire key for encrypting new messages. If eavesdropping is detected, we must discard a number of key bits corresponding to the length of the message, but can re-use almost all of the rest. We show this is essentially optimal. Thus, provided the adversary does not interfere (too much) with the quantum channel, we can securely send an arbitrary number of message bits, independently of the length of the initial key. Moreover, the key-recycling mechanism only requires one-bit feedback. While ordinary quantum key distribution with a classical one time pad could be used instead to obtain a similar functionality, this would need more rounds of interaction and more communication.",
"We consider the scenario where Alice wants to send a secret (classical) @math -bit message to Bob using a classical key, and where only one-way transmission from Alice to Bob is possible. In this case, quantum communication cannot help to obtain perfect secrecy with key length smaller then @math . We study the question of whether there might still be fundamental differences between the case where quantum as opposed to classical communication is used. In this direction, we show that there exist ciphers with perfect security producing quantum ciphertext where, even if an adversary knows the plaintext and applies an optimal measurement on the ciphertext, his Shannon uncertainty about the key used is almost maximal. This is in contrast to the classical case where the adversary always learns @math bits of information on the key in a known plaintext attack. We also show that there is a limit to how different the classical and quantum cases can be: the most probable key, given matching plain- and ciphertexts, has the same probability in both the quantum and the classical cases. We suggest an application of our results in the case where only a short secret key is available and the message is much longer.",
"Quantum information is a valuable resource which can be encrypted in order to protect it. We consider the size of the one-time pad that is needed to protect quantum information in a number of cases. The situation is dramatically different from the classical case: we prove that one can recycle the one-time pad without compromising security. The protocol for recycling relies on detecting whether eavesdropping has occurred, and further relies on the fact that information contained in the encrypted quantum state cannot be fully accessed. We prove the security of recycling rates when authentication of quantum states is accepted, and when it is rejected. We note that recycling schemes respect a general law of cryptography which we introduce relating the size of private keys, sent qubits, and encrypted messages. We discuss applications for encryption of quantum information in light of the resources needed for teleportation. Potential uses include the protection of resources such as entanglement and the memory of quantum computers. We also introduce another application: encrypted secret sharing and find that one can even reuse the private key that is used to encrypt a classical message. In a number of cases, one finds that the amount of private keymore » needed for authentication or protection is smaller than in the general case.« less"
]
}
|
1010.2686
|
2951077603
|
In this paper, we consider a particular class of selective fading channel corresponding to a channel that is selective either in time or in frequency. For this class of channel, we propose a systematic way to achieve the optimal DMT derived in Coronel and Bölcskei, IEEE ISIT, 2007 by extending the non-vanishing determinant (NVD) criterion to the selective channel case. A new code construction based on split NVD parallel codes is then proposed to satisfy the NVD parallel criterion. This result is of significant interest not only in its own right, but also because it settles a long-standing debate in the literature related to the optimal DMT of selective fading channels.
|
It turns out from the geometrical interpretation that the outage event is reduced to the probability that the @math Jensen channel, denoted by @math in the rest of the paper, is in outage, which is the Jensen outage event in the Coronel and Bölcskei terminology @cite_0 . This means that the outage event is reduced to, @math Note that the straightforward generalization of the flat fading outage results to the block diagonal matrix in ) as in @cite_3 and @cite_2 does not take into account the impact of coding among the channel blocks in the analytical outage derivation, and so does not lead to an accurate outage probability expression. In the following, we show how this optimal DMT can be achieved using a code derived from a cyclic division algebra (CDA).
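For intuition about the outage event itself, the flat-fading special case is easy to probe by Monte Carlo: estimate P[log2(1 + SNR |h|^2) < R] under Rayleigh fading. A sketch only; it does not capture the Jensen-channel outage of the selective case analyzed above:

    import numpy as np

    def outage_prob(snr_db, rate, n_trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        snr = 10.0 ** (snr_db / 10.0)
        h2 = rng.exponential(size=n_trials)          # |h|^2 for Rayleigh fading
        return np.mean(np.log2(1.0 + snr * h2) < rate)

    # The slope of log P_out versus SNR (in dB) reveals the diversity order.
    for snr_db in (10, 20, 30):
        print(snr_db, outage_prob(snr_db, rate=2.0))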
|
{
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_2"
],
"mid": [
"",
"2129766733",
"2951411633"
],
"abstract": [
"",
"Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. We propose the point of view that both types of gains can be simultaneously obtained for a given multiple-antenna channel, but there is a fundamental tradeoff between how much of each any coding scheme can get. For the richly scattered Rayleigh-fading channel, we give a simple characterization of the optimal tradeoff curve and use it to evaluate the performance of existing multiple antenna schemes.",
"In this paper we investigate the criteria proposed by for constructing MIMO MAC-DMT optimal codes over several classes of fading channels. We first give a counterexample showing their DMT result is not correct when the channel is frequency-selective. For the case of symmetric MIMO-MAC flat fading channels, their DMT result reduces to exactly the same as that derived by , and we therefore focus on their criteria for constructing MAC-DMT optimal codes, especially when the number of receive antennas is sufficiently large. In such case, we show their criterion is equivalent to requiring the codes of any subset of users to satisfy a joint non-vanishing determinant criterion when the system operates in the antenna pooling regime. Finally an upper bound on the product of minimum eigenvalues of the difference matrices is provided, and is used to show any MIMO-MAC codes satisfying their criterion can possibly exist only when the target multiplexing gain is small."
]
}
|
1010.1868
|
1651891458
|
Actors in realistic social networks play not one but a number of diverse roles depending on whom they interact with, and a large number of such role-specific interactions collectively determine social communities and their organizations. Methods for analyzing social networks should capture these multi-faceted role-specific interactions, and, more interestingly, discover the latent organization or hierarchy of social communities. We propose a hierarchical Mixed Membership Stochastic Blockmodel to model the generation of hierarchies in social communities, selective membership of actors to subsets of these communities, and the resultant networks due to within- and cross-community interactions. Furthermore, to automatically discover these latent structures from social networks, we develop a Gibbs sampling algorithm for our model. We conduct extensive validation of our model using synthetic networks, and demonstrate the utility of our model in real-world datasets such as predator-prey networks and citation networks.
|
While there has been a great deal of work on graph clustering and community structure inference @cite_11 @cite_10 @cite_17 @cite_18 @cite_6 , probabilistic generative models for hierarchical community formation and latent multi-role inference of every actor and link have just started to draw attention.
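Of the community-detection methods cited above, the fast hierarchical agglomeration of @cite_18 is the easiest to try directly, e.g., through its networkx implementation (the karate-club graph below is just a stock test network):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.karate_club_graph()                       # small benchmark social network
    communities = greedy_modularity_communities(G)   # greedy modularity agglomeration
    for i, c in enumerate(communities):
        print(f"community {i}: {sorted(c)}")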
|
{
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_6",
"@cite_10",
"@cite_17"
],
"mid": [
"2047940964",
"1971421925",
"2017987256",
"1985381446",
""
],
"abstract": [
"The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks1,2,3. Specifically, we demonstrate that we can find functional modules4,5 in complex networks, and classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a ‘cartographic representation’ of complex networks. Metabolic networks6,7,8 are among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability9. We use our method to analyse the metabolic networks of twelve organisms from three different superkingdoms. We find that, typically, 80 of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that metabolites that participate in only a few reactions but that connect different modules are more conserved than hubs whose links are mostly within a single module.",
"Compartments1 in food webs are subgroups of taxa in which many strong interactions occur within the subgroups and few weak interactions occur between the subgroups2. Theoretically, compartments increase the stability in networks1,2,3,4,5, such as food webs. Compartments have been difficult to detect in empirical food webs because of incompatible approaches6,7,8,9 or insufficient methodological rigour8,10,11. Here we show that a method for detecting compartments from the social networking science12,13,14 identified significant compartments in three of five complex, empirical food webs. Detection of compartments was influenced by food web resolution, such as interactions with weights. Because the method identifies compartmental boundaries in which interactions are concentrated, it is compatible with the definition of compartments. The method is rigorous because it maximizes an explicit function, identifies the number of non-overlapping compartments, assigns membership to compartments, and tests the statistical significance of the results12,13,14. A graphical presentation14 reveals systemic relationships and taxa-specific positions as structured by compartments. From this graphic, we explore two scenarios of disturbance to develop a hypothesis for testing how compartmentalized interactions increase stability in food webs15,16,17.",
""
]
}
|
1010.1868
|
1651891458
|
Actors in realistic social networks play not one but a number of diverse roles depending on whom they interact with, and a large number of such role-specific interactions collectively determine social communities and their organizations. Methods for analyzing social networks should capture these multi-faceted role-specific interactions, and, more interestingly, discover the latent organization or hierarchy of social communities. We propose a hierarchical Mixed Membership Stochastic Blockmodel to model the generation of hierarchies in social communities, selective membership of actors to subsets of these communities, and the resultant networks due to within- and cross-community interactions. Furthermore, to automatically discover these latent structures from social networks, we develop a Gibbs sampling algorithm for our model. We conduct extensive validation of our model using synthetic networks, and demonstrate the utility of our model in real-world datasets such as predator-prey networks and citation networks.
|
As mentioned earlier, the MMSB model @cite_16 enables inference of the latent roles of every actor and link in a network, but it cannot capture hierarchical structures of possible communities in the network. The link prediction model in @cite_12 employs an Indian Buffet Process prior over actor positions in an infinite-dimensional latent feature space. In that respect, it may be thought of as a nonparametric extension of the MMSB. However, the goal of their model is missing link prediction rather than inference of latent organizational structure.
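For concreteness, the generative process of the MMSB @cite_16 can be sampled in a few lines: per-actor mixed-membership vectors from a Dirichlet, a pair-specific role draw for each direction of interaction, and a Bernoulli link through a role-compatibility matrix. A toy sketch with arbitrary parameter values:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 30, 3                               # actors, roles
    alpha = 0.1 * np.ones(K)                   # Dirichlet concentration
    B = 0.02 + 0.9 * np.eye(K)                 # dense within-role, sparse across

    theta = rng.dirichlet(alpha, size=N)       # mixed-membership vector per actor
    Y = np.zeros((N, N), dtype=int)            # adjacency matrix
    for p in range(N):
        for q in range(N):
            if p == q:
                continue
            z_pq = rng.choice(K, p=theta[p])   # role p takes toward q
            z_qp = rng.choice(K, p=theta[q])   # role q takes toward p
            Y[p, q] = rng.binomial(1, B[z_pq, z_qp])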
|
{
"cite_N": [
"@cite_16",
"@cite_12"
],
"mid": [
"2107107106",
"2158535911"
],
"abstract": [
"Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.",
"As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets."
]
}
|
1010.1524
|
1678749519
|
Applications such as traffic engineering and network provisioning can greatly benefit from knowing, in real time, what is the largest input rate at which it is possible to transmit on a given path without causing congestion. We consider a probabilistic formulation for available bandwidth where the user specifies the probability of achieving an output rate almost as large as the input rate. We are interested in estimating and tracking the network-wide probabilistic available bandwidth (PAB) on multiple paths simultaneously with minimal overhead on the network. We propose a novel framework based on chirps, Bayesian inference, belief propagation and active sampling to estimate the PAB. We also consider the time evolution of the PAB by forming a dynamic model and designing a tracking algorithm based on particle filters. We implement our method in a lightweight and practical tool that has been deployed on the PlanetLab network to do online experiments. We show through these experiments and simulations that our approach outperforms block-based algorithms in terms of input rate cost and probability of successful transmission.
|
For real-time estimation, the proposed techniques @cite_16 @cite_22 @cite_0 @cite_18 @cite_14 use Kalman filtering, taking advantage of the piecewise-linear relation between utilization and available bandwidth. The main drawback of the Kalman filter is that the conditional probability distributions have to be linear-Gaussian. The authors of ASSOLO use Vertical Horizontal filtering, which ignores sharp, non-persistent changes but converges quickly to the new value if the change persists @cite_25 . However, since their tool was only tested in an environment with constant bit-rate cross-traffic, its performance for tracking is unknown.
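In its scalar form, the Kalman recursion these tools build on is only a few lines; a generic sketch with a random-walk state standing in for the available bandwidth and hypothetical noise parameters (not the specific measurement model of BART or its relatives, which exploit the utilization/rate relation mentioned above):

    import numpy as np

    def kalman_track(z, q=1.0, r=25.0, x0=50.0, p0=100.0):
        """z: noisy AB measurements (e.g., Mbps); returns the filtered estimates."""
        x, p, out = x0, p0, []
        for zk in z:
            p = p + q                  # predict: random-walk process noise q
            k = p / (p + r)            # Kalman gain for measurement noise r
            x = x + k * (zk - x)       # correct with the new measurement
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(0)
    true_ab = np.concatenate([np.full(100, 60.0), np.full(100, 30.0)])  # level shift
    est = kalman_track(true_ab + rng.normal(0.0, 5.0, size=200))

The ratio q/r controls the trade-off, noted above, between agility to real changes and smoothness of the estimate.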
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_0",
"@cite_16",
"@cite_25"
],
"mid": [
"2164884625",
"1487153207",
"2092024955",
"2123312764",
"2157603331",
"2129053713"
],
"abstract": [
"In this paper, we propose a practical, efficient and real-time technique to estimate the end-to-end available bandwidth in the condition of probe packets losing. One of main ideas of the proposed technology is compensation mechanism, which uses a least square method to approximately compensate the losing packets based on polynomial-mimesis technology in the communication channel. The second is that it uses the relationship between utilization and the sending rate of the probe packet when each probe packet needs to line up. The last one is that paper uses Kalman Filter to rebuild the linear system between utilization and rate, and then calculate the available bandwidth. Theoretical analysis and experimental results show that the proposed technique is effective.",
"In this paper we address the problem of online bandwidth estimation in wired and wireless LANs. To this end, we employ active probing, i.e. we continuously inject packet probes into the network. We present the key challenges and analyze the trade-offs between fast change detection and estimate smoothness. We show the benefit of using Kalman filtering to obtain optimal estimates under certain conditions and provide a procedure for parameterizing the filter with respect to specific use cases. Furthermore, we evaluate the influence of probing train length on the results. Based on our findings we developed a tool implementing the presented methodology. We support our theoretical results by a number of real-world measurements as well as simulations.",
"This paper presents a filter-based method BART (Bandwidth Available in Real-Time) for real-time estimation of end-to-end available bandwidth in packet-switched communication networks. BART relies on self-induced congestion, and repeatedly samples the available bandwidth of the network path with sequences of probe-packet pairs. The method is light-weight with respect to computation and memory requirements, and performs well when only a small amount of probe traffic is injected. BART uses Kalman filtering, which enables real-time estimation. It maintains a current estimate, which is incrementally improved with each new measurement of the inter-packet time separation in a sequence of probe-packet pairs. It is possible to tune BART according to specific needs. The estimation performance can be significantly enhanced by employing a change-detection technique. An implementation of BART has been evaluated in a physical test network with carefully controlled cross traffic. In addition, experiments have been performed over the Internet as well as over a mobile broadband connection.",
"The available bandwidth (AB) of an end-to-end path is its remaining capacity and it is an important metric for several applications. That's why several available bandwidth estimation tools have been published recently. Most of these tools use the probe rate model. This model is based on the concept of self- induced congestion and requires that the tools send a packet train at a rate matching the available bandwidth. The main issue with this model is that these tools congest the path under study. In this paper we present a novel available bandwidth estimation tool that takes into account this issue. Our tool is based on a mathematical model that sends packet trains at a rate lower than the AB. The main drawback of this model is that it is not able to track the AB. To solve this issue we propose to apply Kalman filters (KF) to the model. By applying these filters we can produce real-time estimations of the available bandwidth and monitor its changes. In addition the KFs are able to filter the noisy (erroneous) measurements improving the overall accuracy. We also present an extensive evaluation of our tool in different network scenarios and we compare its performance with that of pathChirp (a state-of-the-art available bandwidth estimation tool).",
"This paper presents a new method, BART (Bandwidth Available in Real-Time), for estimating the end-to-end available bandwidth over a network path. It estimates bandwidth quasi-continuously, in real-time. The method has also been implemented as a tool. It relies on self-induced congestion, and repeatedly samples the available bandwidth of the network path with sequences of probe packet pairs, sent at randomized rates. BART requires little computation in each iteration, is light-weight with respect to memory requirements, and adds only a small amount of probe traffic. The BART method uses Kalman filtering, which enables real-time estimation (a.k.a. tracking). It maintains a current estimate, which is incrementally improved with each new measurement of the inter-packet time separations in a sequence of probe packet pairs. The measurement model has a strong non-linearity, and would not at first sight be considered suitable for Kalman filtering, but we show how this non-linearity can be handled. BART may be tuned according to the specific needs of the measurement application, such as agility vs. stability of the estimate. We have tested an implementation of BART in a physical test network with carefully controlled cross traffic, with good accuracy and agreement. Test measurements have also been performed over the Internet. We compare the performance of BART with that of pathChirp, a state-of-the-art tool for measuring end-to-end available bandwidth in real-time.",
"End-to-end available bandwidth estimation is very important for bandwidth dependent applications, quality of service verification and traffic engineering. Although several techniques and tools have been developed in the past, producing reliable estimations in real-time still remains challenging -- it is necessary to ensure that the measurement process is accurate, non-intrusive and robust to non-deterministic delays or traffic bursts. This paper presents ASSOLO, a new active probing tool for estimating available bandwidth based on the concept of self-induced congestion''. ASSOLO features a new probing traffic profile called (Reflected ExponentiAl Chirp), which tests a wide range of rates being more accurate in the center of the probing interval. Moreover, the tool runs inside a real-time operating system and uses some de-noising techniques to improve the measurement process. Experimental results show that ASSOLO outperforms pathChirp, a state-of-the-art measurement tool, estimating available bandwidth with greater accuracy and stability."
]
}
|
1010.1524
|
1678749519
|
Applications such as traffic engineering and network provisioning can greatly benefit from knowing, in real time, what is the largest input rate at which it is possible to transmit on a given path without causing congestion. We consider a probabilistic formulation for available bandwidth where the user specifies the probability of achieving an output rate almost as large as the input rate. We are interested in estimating and tracking the network-wide probabilistic available bandwidth (PAB) on multiple paths simultaneously with minimal overhead on the network. We propose a novel framework based on chirps, Bayesian inference, belief propagation and active sampling to estimate the PAB. We also consider the time evolution of the PAB by forming a dynamic model and designing a tracking algorithm based on particle filters. We implement our method in a lightweight and practical tool that has been deployed on the PlanetLab network to do online experiments. We show through these experiments and simulations that our approach outperforms block-based algorithms in terms of input rate cost and probability of successful transmission.
|
These tools cannot be applied directly to scenarios where the available bandwidths of multiple paths have to be estimated simultaneously. The probes generate interference on links shared by multiple paths, which can lead to significant underestimation, and also introduce an unacceptable overhead and load on the network and the hosts @cite_3 . Alternatively, each path can be probed independently in sequence rather than simultaneously. This approach is not only time-consuming but also very inefficient, since it does not take advantage of the notable correlations in available bandwidth (AB) when links are shared among paths. The techniques that have been proposed for large-scale scenarios do rely on the correlations between links, or even between various metrics (route, number of hops, capacity), to reduce the number of probes required to produce accurate estimates @cite_21 @cite_23 @cite_5 . However, all of them are limited to estimating, and not tracking, the available bandwidth. Multi-path tracking has been proposed for other metrics; Coates and Nowak use sequential Monte Carlo inference to estimate and track internal delay characteristics @cite_19 .
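The sequential Monte Carlo machinery used by Coates and Nowak @cite_19 is, in its simplest bootstrap form, also compact; a generic particle-filter sketch with a random-walk state and a Gaussian likelihood (hypothetical parameters, not the internal-delay model of that paper):

    import numpy as np

    def particle_filter(z, n=1000, sigma_x=1.0, sigma_z=5.0, seed=0):
        rng = np.random.default_rng(seed)
        particles = rng.uniform(0.0, 100.0, size=n)   # prior over the tracked quantity
        est = []
        for zk in z:
            particles += rng.normal(0.0, sigma_x, n)              # propagate (random walk)
            w = np.exp(-0.5 * ((zk - particles) / sigma_z) ** 2)  # likelihood weights
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]     # resample
            est.append(particles.mean())
        return np.array(est)

Unlike the Kalman filter, this places no linear-Gaussian restriction on the model, which is why it suits the probabilistic formulation pursued here.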
|
{
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_5"
],
"mid": [
"2054149522",
"1769629372",
"1530558628",
"2162658475",
"2130392113"
],
"abstract": [
"Recent progress in active measurement techniques has made it possible to estimate end-to-end path available bandwidth. However, how to efficiently obtain available bandwidth information for the N2 paths in a large N-node system remains an open problem. While researchers have developed coordinate-based models that allow any node to quickly and accurately estimate latency in a scalable fashion, no such models exist for available bandwidth. In this paper we introduce BRoute--a scalable available bandwidth estimation system that is based on a route sharing model. The characteristics of BRoute are that its overhead is linear with the number of end nodes in the system, and that it requires only limited cooperation among end nodes. BRoute leverages the fact that most Internet bottlenecks are on path edges, and that edges are shared by many different paths. It uses AS-level source and sink trees to characterize and infer path-edge sharing in a scalable fashion. In this paper, we describe the BRoute architecture and evaluate the performance of its components. Initial experiments show that BRoute can infer path edges with an accuracy of over 80 . In a small case study on Planetlab, 80 of the available bandwidth estimates obtained from BRoute are accurate within 50 .",
"In recent years the research community has developed many techniques to estimate the end-to-end available bandwidth of an Internet path. This important metric can be potentially exploited to optimize the performance of several distributed systems and, even, to improve the effectiveness of the congestion control mechanism of TCP. Thus, it has been suggested that some existing estimation techniques could be used for this purpose. However, existing tools were not designed for large-scale deployments and were mostly validated in controlled settings, considering only one measurement running at a time. In this paper, we argue that current tools, while offering good estimates when used alone, might not work in large-scale systems where several estimations severely interfere with each other. We analyze the properties of the measurement paradigms employed today and discuss their functioning, study their overhead and analyze their interference. Our testbed results show that current techniques are insufficient as they are. Finally, we will discuss and propose some principles that should be taken into account for including available bandwidth measurements in large-scale distributed systems.",
"On-line, spatially localized information about internal network performance can greatly assist dynamic routing algorithms and traffic transmission protocols. However, it is impractical to measure network traffic at all points in the network. A promising alternative is to measure only at the edge of the network and infer internal behavior from these measurements. In this paper we concentrate on the estimation and localization of internal delays based on end-to-end delay measurements from a source to receivers. We propose a sequential Monte Carlo (SMC) procedure capable of tracking nonstationary network behavior and estimating time-varying, internal delay characteristics. Simulation experiments demonstrate the performance of the SMC approach.",
"This paper presents a new bandwidth inference mechanism that can be used to predict bandwidth across two nodes on the Internet. We used simulation and actual implementation on PlanetLab to compare the performance of the proposed mechanism against an existing approach. The results indicate that our approach is lightweight and yields better performance.",
"With the ever growing size of the Internet and increasing popularity of the overlay and peer-to-peer networks, scalable end-to-end (e2e) network monitoring is essential for better network management and application performance. For large scale networks, an e2e monitoring infrastructure should minimize the measurement cost while ensuring that the network is still monitored at fine enough time-scales required for each application flow. We explore the relationships between different e2e network metrics with the aim of leveraging such relationships for reducing monitoring costs while maintaining measurement accuracy. We analyze long range network measurements from PlanetLab, where we collected e2e network data (route, number of hops, capacity bandwidth and available bandwidth) for about two years on several thousand paths. We also present a few schemes to leverage the metric correlations and reduce the monitoring cost. Our preliminary results indicate that in some cases, we can reduce the monitoring costs by 75 while maintaining the accuracy at about 88 ."
]
}
|
1010.1526
|
2027846664
|
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance measures which rely on the inverse covariance matrix of the data. Unfortunately--for time series data--the covariance matrix often has low rank. To alleviate this problem we can either use a pseudoinverse, covariance shrinking or limit the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As we expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinking or the diagonal approach.
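To make the three remedies named in this abstract concrete (pseudoinverse, covariance shrinking, diagonal restriction), a minimal NumPy sketch is given below; the shrinkage target, the shrinkage weight, and the eps guard are assumptions rather than the paper's exact recipe. For the per-class variant recommended above, one would fit a separate matrix on each class's training series:

```python
import numpy as np

def inverse_covariances(X, shrink=0.1, eps=1e-12):
    """Three ways to 'invert' a possibly rank-deficient covariance matrix."""
    C = np.cov(X, rowvar=False)          # rows = series, columns = time points
    d = C.shape[0]
    pinv = np.linalg.pinv(C)             # Moore-Penrose pseudoinverse
    target = (np.trace(C) / d) * np.eye(d)
    shrunk = np.linalg.inv((1.0 - shrink) * C + shrink * target)  # shrinking
    diag = np.diag(1.0 / np.maximum(np.diag(C), eps))             # diagonal only
    return pinv, shrunk, diag

def mahalanobis(x, y, M):
    """Mahalanobis-type distance induced by a positive semi-definite M."""
    diff = x - y
    return float(np.sqrt(max(diff @ M @ diff, 0.0)))
```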
|
Several distance functions are used for time series classification, such as Dynamic Time Warping (DTW), DISSIM, Threshold Queries, Edit distances, Longest Common Subsequences (LCSS), Swale, SpADe, and Cluster, Then Classify (CTC). @cite_8 presented an extensive comparison of these distance functions and concluded that DTW is among the best measures and that the accuracy of the Euclidean distance converges to that of DTW as the size of the training set increases.
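For reference, the unconstrained DTW distance that these comparisons build on can be written in a few lines; this is the generic textbook recursion (no warping window), not any particular cited variant:

```python
import numpy as np

def dtw(a, b):
    """Textbook O(n*m) dynamic time warping with squared-error local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))
```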
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2098759488"
],
"abstract": [
"The last decade has witnessed a tremendous growths of interests in applications that deal with querying and mining of time series data. Numerous representation methods for dimensionality reduction and similarity measures geared towards time series have been introduced. Each individual work introducing a particular method has made specific claims and, aside from the occasional theoretical justifications, provided quantitative experimental observations. However, for the most part, the comparative aspects of these experiments were too narrowly focused on demonstrating the benefits of the proposed methods over some of the previously introduced ones. In order to provide a comprehensive validation, we conducted an extensive set of time series experiments re-implementing 8 different representation methods and 9 similarity measures and their variants, and testing their effectiveness on 38 time series data sets from a wide variety of application domains. In this paper, we give an overview of these different techniques and present our comparative experimental findings regarding their effectiveness. Our experiments have provided both a unified validation of some of the existing achievements, and in some cases, suggested that certain claims in the literature may be unduly optimistic."
]
}
|
1010.1526
|
2027846664
|
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance measures which rely on the inverse covariance matrix of the data. Unfortunately--for time series data--the covariance matrix often has low rank. To alleviate this problem we can either use a pseudoinverse, covariance shrinking or limit the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As we expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinking or the diagonal approach.
|
In a general Machine Learning setting, @cite_0 @cite_11 compared the Euclidean distance with the conventional and class-based Mahalanobis distances. One of our contributions is to validate these generic results on time series: instead of tens of features, we have hundreds or even thousands of values, which makes the problem mathematically more challenging, as the ranks of our covariance matrices are often tiny compared to their sizes.
|
{
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2071914327",
"2166987598"
],
"abstract": [
"Abstract A class-dependent weighted (CDW) dissimilarity measure in vector spaces is proposed to improve the performance of the nearest neighbor (NN) classifier. In order to optimize the required weights, an approach based on Fractional Programming is presented. Experiments with several standard benchmark data sets show the effectiveness of the proposed technique.",
"A prototype reduction algorithm is proposed which simultaneous train both a reduced set of prototypes and a suitable local metric for these prototypes. Starting with an initial selection of a small number of prototypes, it iteratively adjusts both the position (features) of these prototypes and the corresponding local-metric weights. The resulting prototypes metric combination minimizes a suitable estimation of the classification error probability. Good performance of this algorithm is assessed through experiments with a number of benchmark data sets and through a real two-class classification task which consists of detecting human faces in unrestricted-background pictures."
]
}
|
1010.1526
|
2027846664
|
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance measures which rely on the inverse covariance matrix of the data. Unfortunately--for time series data--the covariance matrix often has low rank. To alleviate this problem we can either use a pseudoinverse, covariance shrinking or limit the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As we expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinking or the diagonal approach.
|
More generally, distance metric learning has an extensive literature. We refer the reader to @cite_6 for a review. A conventional distance-learning approach is to find an optimal generalized ellipsoid distance with respect to a specific loss function. The LMNN algorithm proposed by @cite_6 takes a different approach: it seeks to force nearest neighbors to belong to the same class and to separate instances from different classes by a large margin. LMNN can be formulated as a semi-definite programming problem. The authors also propose a modification, which they call multiple-metrics LMNN, as it learns a different distance for each class.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2106053110"
],
"abstract": [
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner."
]
}
|
1010.1526
|
2027846664
|
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance measures which rely on the inverse covariance matrix of the data. Unfortunately--for time series data--the covariance matrix often has low rank. To alleviate this problem we can either use a pseudoinverse, covariance shrinking or limit the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As we expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinking or the diagonal approach.
|
There are many extensions and alternatives to NN classification. For example, @cite_10 use instance weights to improve classification. Meanwhile, @cite_7 learn a distance per instance.
|
{
"cite_N": [
"@cite_10",
"@cite_7"
],
"mid": [
"2170613853",
"2004011892"
],
"abstract": [
"The performance of Nearest Neighbor (NN) classifier is known to be sensitive to the distance (or similarity) function used in classifying a test instance. Another major disadvantage of NN is that it uses all training instances in the generalization phase. This can cause slow execution speed and high storage requirement when dealing with large datasets. In the past research, many solutions have been proposed to handle one or both of the above problems. In the scheme proposed in this paper, we tackle both of these problems by assigning a weight to each training instance. The weight of a training instance is used in the generalization phase to calculate the distance (or similarity) of a query pattern to that instance. The basic NN classifier can be viewed as a special case of this scheme that treats all instances equally (by assigning equal weight to all training instances). Using this form of weighted similarity measure, we propose a learning algorithm that attempts to maximize the leave-one-out (LV1) classification rate of the NN rule by adjusting the weights of the training instances. At the same time, the algorithm reduces the size of the training set and can be viewed as a powerful instance reduction technique. An instance having zero weight is not used in the generalization phase and can be virtually removed from the training set. We show that our scheme has comparable or better performance than some recent methods proposed in the literature for the task of learning the distance function and or prototype reduction.",
"In many real-world applications, such as image retrieval, it would be natural to measure the distances from one instance to others using instance specific distance which captures the distinctions from the perspective of the concerned instance. However, there is no complete framework for learning instance specific distances since existing methods are incapable of learning such distances for test instance and unlabeled data. In this paper, we propose the Isd method to address this issue. The key of Isd is metric propagation, that is, propagating and adapting metrics of individual labeled examples to individual unlabeled instances. We formulate the problem into a convex optimization framework and derive efficient solutions. Experiments show that Isd can effectively learn instance specific distances for labeled as well as unlabeled instances. The metric propagation scheme can also be used in other scenarios."
]
}
|
1010.1526
|
2027846664
|
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance measures which rely on the inverse covariance matrix of the data. Unfortunately--for time series data--the covariance matrix often has low rank. To alleviate this problem we can either use a pseudoinverse, covariance shrinking or limit the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As we expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinking or the diagonal approach.
|
@cite_9 proposed a weighted version of the DTW called Adaptable Time Warping. Instead of computing @math , it computes @math , where @math is some matrix. Unfortunately, finding the optimal matrix @math can be a challenge. @cite_1 investigated another form of weighted DTW, where one seeks to minimize a weighted alignment cost in which @math is some weight vector. Many other variations on the DTW distance have been proposed, e.g., @cite_4 .
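As one plausible reading of the second weighted-DTW variant (a weight that grows with the phase difference between aligned points, in the spirit of the modified logistic weight function described in the cited abstract), a hedged sketch follows; the midpoint choice and the parameter defaults are assumptions, not the published settings:

```python
import numpy as np

def wdtw(a, b, g=0.05, w_max=1.0):
    """Weighted DTW: logistic weight on the phase difference |i - j|."""
    n, m = len(a), len(b)
    mc = (n + m) / 4.0          # assumed midpoint of the weight function
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Larger |i - j| (more warping) is penalized more heavily.
            w = w_max / (1.0 + np.exp(-g * (abs(i - j) - mc)))
            cost = w * (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```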
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_4"
],
"mid": [
"2139661955",
"2008348094",
"1978551333"
],
"abstract": [
"Most machine learning and data mining algorithms for time series datasets need a suitable distance measure. In addition to classic p-norm distance, numerous other distance measures exist and the most popular is Dynamic Time Warping. Here we propose a new distance measure, called Adaptable Time Warping (ATW), which generalizes all previous time warping distances. We present a learning process using a genetic algorithm that adapts ATW in a locally optimal way, according to the current classification issue we have to resolve. It?s possible to prove that ATW with optimal parameters is at least equivalent or at best superior to the other time warping distances for all classification problems. We show this assertion by performing comparative tests on two real datasets. The originality of this work is that we propose a whole learning process directly based on the distance measure rather than on the time series themselves.",
"Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems.",
"The most widely used measures of time series proximity are the Euclidean distance and dynamic time warping. The latter can be derived from the distance introduced by Maurice Frechet in 1906 to account for the proximity between curves. The major limitation of these proximity measures is that they are based on the closeness of the values regardless of the similarity w.r.t. the growth behavior of the time series. To alleviate this drawback we propose a new dissimilarity index, based on an automatic adaptive tuning function, to include both proximity measures w.r.t. values and w.r.t. behavior. A comparative numerical analysis between the proposed index and the classical distance measures is performed on the basis of two datasets: a synthetic dataset and a dataset from a public health study."
]
}
|
1010.0014
|
2949266824
|
In this paper modified variants of the sparse Fourier transform algorithms from [14] are presented which improve on the approximation error bounds of the original algorithms. In addition, simple methods for extending the improved sparse Fourier transforms to higher dimensional settings are developed. As a consequence, approximate Fourier transforms are obtained which will identify a near-optimal k-term Fourier series for any given input function, @math time (neglecting logarithmic factors). Faster randomized Fourier algorithm variants with runtime complexities that scale linearly in the sparsity parameter k are also presented.
|
Let @math be the @math Discrete Fourier Transform (DFT) matrix defined by @math , let @math be a given function, and let @math be the vector of @math equally spaced samples from @math on @math . In this case, Theorem tells us that collecting the @math function samples determined by @math will be sufficient to accurately approximate the discrete Fourier transform of @math with high probability. More precisely, if @math is input to a recovery algorithm known as CoSaMP @cite_12 , the following theorem holds.
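The following is a minimal, hedged sketch of the CoSaMP iteration referenced above, applied to rows of a DFT matrix selected uniformly at random; the problem sizes, the fixed iteration count, and the sampling setup are illustrative assumptions, not the sampling scheme analyzed in the paper:

```python
import numpy as np

def cosamp(Phi, y, k, n_iter=20):
    """Minimal CoSaMP iteration: recover a k-sparse x from y = Phi @ x."""
    m, N = Phi.shape
    x = np.zeros(N, dtype=complex)
    r = y.astype(complex).copy()
    for _ in range(n_iter):
        proxy = Phi.conj().T @ r                          # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * k:]        # 2k largest components
        T = np.union1d(omega, np.flatnonzero(x))          # merge supports
        b = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]  # least squares on T
        x[:] = 0.0
        keep = np.argsort(np.abs(b))[-k:]                 # prune to k terms
        x[T[keep]] = b[keep]
        r = y - Phi @ x                                   # update the residual
    return x

# Illustrative use with randomly selected rows of a DFT matrix.
N, m, k = 256, 80, 5
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
rows = np.random.default_rng(1).choice(N, size=m, replace=False)
x_true = np.zeros(N, dtype=complex)
x_true[[3, 40, 41, 100, 200]] = [1.0, 2.0, -1.0, 0.5, 1.0j]
y = F[rows] @ x_true
print(np.flatnonzero(np.abs(cosamp(F[rows], y, k)) > 1e-6))
```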
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2289917018"
],
"abstract": [
"Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O ( N log 2 N ) , where N is the length of the signal."
]
}
|
1009.6215
|
1669198458
|
Segmentation is often an essential intermediate step in image analysis. A volume segmentation characterizes the underlying volume image in terms of geometric information--segments, faces between segments, curves in which several faces meet--as well as a topology on these objects. Existing algorithms encode this information in designated data structures, but require that these data structures fit entirely in Random Access Memory (RAM). Today, 3D images with several billion voxels are acquired, e.g. in structural neurobiology. Since these large volumes can no longer be processed with existing methods, we present a new algorithm which performs geometry and topology extraction with a runtime linear in the number of voxels and log-linear in the number of faces and curves. The parallelizable algorithm proceeds in a block-wise fashion and constructs a consistent representation of the entire volume image on the hard drive, making the structure of very large volume segmentations accessible to image analysis. The parallelized C++ source code, free command line tools and MATLAB mex files are available from this http URL
|
Data structures that store, for each component of a segmentation, all points on the topological grid that constitute this component were proposed and implemented by @cite_13 for image segmentations and envisioned by @cite_1 for volume image segmentations. However, a storage concept that is suitable for large volume segmentations has so far been missing.
|
{
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2073401105",
"1542086458"
],
"abstract": [
"In this paper, we define the three-dimensional topological map, a model which represents both the topological and geometrical information of a three-dimensional labeled image. Since this model describes the image's topology in a minimal way, we can use it to define efficient image processing algorithms. The topological map is the last level of map hierarchy. Each level represents the region boundaries of the image and is defined from the previous level in the hierarchy, thus giving a simple constructive definition. This model is an extension of the similar model defined for 2D images. Progressive definition based on successive map levels allows us to extend this model to higher dimension. Moreover, with progressive definition, we can study each level separately. This simplifies the study of disconnection cases and the proofs of topological map properties. Finally, we provide an incremental extraction algorithm which extracts any map of the hierarchy in a single image scan. Moreover, we show that this algorithm is very efficient by giving the results of our experiments made on artificial images.",
"We propose the GeoMap abstract data type as a unified representation for image segmentation purposes. It manages both topology (based on XPMaps) and pixel-based information, and its interface is carefully designed to support a variety of automatic and interactive segmentation methods. We have successfully used the abstract concept of a GeoMap as a foundation for the implementation of well-known segmentation methods."
]
}
|
1009.6215
|
1669198458
|
Segmentation is often an essential intermediate step in image analysis. A volume segmentation characterizes the underlying volume image in terms of geometric information--segments, faces between segments, curves in which several faces meet--as well as a topology on these objects. Existing algorithms encode this information in designated data structures, but require that these data structures fit entirely in Random Access Memory (RAM). Today, 3D images with several billion voxels are acquired, e.g. in structural neurobiology. Since these large volumes can no longer be processed with existing methods, we present a new algorithm which performs geometry and topology extraction with a runtime linear in the number of voxels and log-linear in the number of faces and curves. The parallelizable algorithm proceeds in a block-wise fashion and constructs a consistent representation of the entire volume image on the hard drive, making the structure of very large volume segmentations accessible to image analysis. The parallelized C++ source code, free command line tools and MATLAB mex files are available from this http URL
|
Combinatorial maps were introduced in image analysis in @cite_5 and are used as data structures, e.g. in @cite_13 as well as in some algorithms of the Computational Geometry Algorithms Library (CGAL) www.cgal.org . The extension of combinatorial maps to higher dimensions is involved but possible @cite_14 @cite_3 and has facilitated the development of the 3-dimensional topological map @cite_18 @cite_4 . This map captures not only the topology of a segmentation but also its embedding into the segmented space, i.e. containment relations and orders of objects @cite_14 @cite_1 . It is therefore more expensive to construct and manipulate than a data structure that encodes only the topology.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_13"
],
"mid": [
"1832172469",
"2036509143",
"1991686011",
"2073401105",
"2048095461",
"1486514476",
"1542086458"
],
"abstract": [
"In this paper we define the 3d topological map and give an optimal algorithm which computes it from a segmented image. This data structure encodes totally all the information given by the segmentation. More, it allows to continue segmentation either algorithmically or interactively. We propose an original approach which uses several levels of maps. This allows us to propose a reasonable and implementable solution where other approaches don't allow suitable solutions. Moreover our solution has been implemented and the theoretical results translate very well in practical applications.",
"This paper deals with the modeling of n-dimensional objects, more precisely with the modeling of subdivisions of n-dimensional topological spaces. We here study the notions of: n-dimensional generalized map (or n-G-map), for the modeling of the topology of any subdivision of any n-dimensional topological space (orientable or not orientable, with or without boundaries); n-dimensional map (or n-map), for the modeling of the topology of any subdivision of any orientable n-dimensional topological space, without boundaries. These two notions extend the notion of topological map, which has been used for the modeling of the topology of any subdivision of any surface. We study in this paper some properties of the n-G-maps and the n-maps (orientability, duality, relationships between n-G-maps and n-maps …), and we define also operations for constructing any n-G-map.",
"Split-and-merge algorithms define a class of image segmentation methods. Topological maps are a mathematical model that represents image subdivisions in 2D and 3D. This paper discusses a split-and-merge method for 3D image data based on the topological map model This model allows representations of states of segmentations and of merge and split operations. Indeed, it can be used as data structure for dynamic changes of segmentation. The paper details such an algorithmic approach and analyzes its time complexity. A general introduction into combinatorial and topological maps is given to support the understanding of the proposed algorithms.",
"In this paper, we define the three-dimensional topological map, a model which represents both the topological and geometrical information of a three-dimensional labeled image. Since this model describes the image's topology in a minimal way, we can use it to define efficient image processing algorithms. The topological map is the last level of map hierarchy. Each level represents the region boundaries of the image and is defined from the previous level in the hierarchy, thus giving a simple constructive definition. This model is an extension of the similar model defined for 2D images. Progressive definition based on successive map levels allows us to extend this model to higher dimension. Moreover, with progressive definition, we can study each level separately. This simplifies the study of disconnection cases and the proofs of topological map properties. Finally, we provide an incremental extraction algorithm which extracts any map of the hierarchy in a single image scan. Moreover, we show that this algorithm is very efficient by giving the results of our experiments made on artificial images.",
"In boundary representation, a geometric object is represented by the union of a ‘topological’ model, which describes the topology of the modelled object, and an ‘embedding’ model, which describes the embedding of the object, for instance in three-dimensional Euclidean space. In recent years, numerous topological models have been developed for boundary representation, and there have been important developments with respect to dimension and orientability. In the main, two types of topological models can be distinguished. ‘Incidence graphs’ are graphs or hypergraphs, where the nodes generally represent the cells of the modelled subdivision (vertex, edge, face, etc.), and the edges represent the adjacency and incidence relations between these cells. ‘Ordered’ models use a single type of basic element (more or less explicitly defined), on which ‘element functions’ act; the cells of the modelled subdivision are implicitly defined in this type of model. In this paper some of the most representative ordered topological models are compared using the concepts of the n-dimensional generalized map and the n-dimensional map. The main result is that ordered topological models are (roughly speaking) equivalent with respect to the class of objects which can be modelled (i.e. with respect to dimension and orientability).",
"Abstract The interactive use of graphic paintboxes has raised a new concept in image synthesis that is sometimes called 2 1 2 D synthesis. It mainly consists of modeling scenes made of possibly overlapping 2D objects, ordered with respect to depth. There exist different approaches to realize such a model. In this paper, we briefly sketch a model based on a partition of the image into elementary regions and the representation of 2D objects by a set of contours. Then we describe precisely the algorithms and data structures in relation to the fundamental operation defined in this model: the insertion of a new contour in a previously defined contour set.",
"We propose the GeoMap abstract data type as a unified representation for image segmentation purposes. It manages both topology (based on XPMaps) and pixel-based information, and its interface is carefully designed to support a variety of automatic and interactive segmentation methods. We have successfully used the abstract concept of a GeoMap as a foundation for the implementation of well-known segmentation methods."
]
}
|
1009.6215
|
1669198458
|
Segmentation is often an essential intermediate step in image analysis. A volume segmentation characterizes the underlying volume image in terms of geometric information--segments, faces between segments, curves in which several faces meet--as well as a topology on these objects. Existing algorithms encode this information in designated data structures, but require that these data structures fit entirely in Random Access Memory (RAM). Today, 3D images with several billion voxels are acquired, e.g. in structural neurobiology. Since these large volumes can no longer be processed with existing methods, we present a new algorithm which performs geometry and topology extraction with a runtime linear in the number of voxels and log-linear in the number of faces and curves. The parallelizable algorithm proceeds in a block-wise fashion and constructs a consistent representation of the entire volume image on the hard drive, making the structure of very large volume segmentations accessible to image analysis. The parallelized C++ source code, free command line tools and MATLAB mex files are available from this http URL
|
The main focus of previous efforts to extract and encode the geometry and topology of segmentations has not been on large volume segmentations but on the efficient processing of the merging and splitting of segments. These operations are required within the context of interactive segmentation. In @cite_0 @cite_1 , representations of the geometry and topology are constructed incrementally, using random access to already constructed parts of the data structure. In order for these algorithms to work efficiently, the underlying data structures need to be kept entirely in RAM. To extract the geometry and topology of a volume segmentation of @math voxels, @math bytes ( @math GB) of RAM are required for the labeling of the topological grid, an amount that is not available on present-day desktop computers. Beyond @math voxels, even the @math TB of RAM of a large server are insufficient. The method presented in this article overcomes this limitation by means of block-wise processing. It makes geometry and topology extraction from large volume segmentations possible.
|
{
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2135097097",
"2073401105"
],
"abstract": [
"Although many interactive segmentation methods exists,none can be considered a silver bullet for all clinical tasks. Moreover, incompatible data representations prevent multiple algorithms from being combined as desired. We propose the GEOMAP as a unified representation for segmentation results and illustrate how it facilitates the design of an integrated framework for interactive medical image analysis. Results show the high flexibility and performance of the new framework.",
"In this paper, we define the three-dimensional topological map, a model which represents both the topological and geometrical information of a three-dimensional labeled image. Since this model describes the image's topology in a minimal way, we can use it to define efficient image processing algorithms. The topological map is the last level of map hierarchy. Each level represents the region boundaries of the image and is defined from the previous level in the hierarchy, thus giving a simple constructive definition. This model is an extension of the similar model defined for 2D images. Progressive definition based on successive map levels allows us to extend this model to higher dimension. Moreover, with progressive definition, we can study each level separately. This simplifies the study of disconnection cases and the proofs of topological map properties. Finally, we provide an incremental extraction algorithm which extracts any map of the hierarchy in a single image scan. Moreover, we show that this algorithm is very efficient by giving the results of our experiments made on artificial images."
]
}
|
1009.5878
|
1549854681
|
This paper presents results of performance benchmarks of the Open Source hypervisor Xen. The study focuses on the network-related performance as well as on the application-related performance of multiple virtual machines running on the same Xen hypervisor. The comparison was carried out using a self-developed benchmark suite that consists of easily available Open Source tools. The goal is to measure the performance of the hypervisor in typical real-world application scenarios when used for "mass virtual hosting", such as hosting solutions of so-called virtual private servers for small-to-medium-sized business environments. The results of the benchmarks show that the tested Xen setup offers good performance with respect to network traffic stress tests, but only 75% of the performance of the non-virtualized reference environment. This application performance score decreases as more virtual machines are running simultaneously.
|
The measurement of the performance of hypervisors like Xen has been the subject of many studies, including barham2003, @cite_1 , deshane2008, apparao2006, cherkasova2005, matthews2007, xu2008, and @cite_11 .
|
{
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"59946610",
"1994040764"
],
"abstract": [
"Xen is an x86 virtual machine monitor produced by the University of Cambridge Computer Laboratory and released under the GNU General Public License. Performance results comparing XenoLinux (Linux running in a Xen virtual machine) to native Linux as well as to other virtualization tools such as User Mode Linux (UML) were recently published in the paper \"Xen and the Art of Virtualization\" at the Symposium on Operating Systems Principles (October 2003). In this study, we repeat this performance analysis of Xen. We also extend the analysis in several ways, including comparing XenoLinux on x86 to an IBM zServer. We use this study as an example of repeated research. We argue that this model of research, which is enabled by open source software, is an important step in transferring the results of computer science research into production environments.",
"Server virtualization is now required for data center systems to reduce the number of servers. However, it is still unclear which business applications are suitable for virtualization. We present our evaluation results for four types of business application benchmarks on our virtualization system. The results show that the virtualization performance of a TPC-H workload, which mainly executes referencing on a database, uses about 90 of the non-virtualized performance, and that the virtualization performance of the TPC-H workload is better than that of the other benchmark applications. The results of a new performance characteristic for virtualization indicated that application programs, which have performance bottlenecks in disk I Os and low CPU utilizations in a non-virtualized environment, are suitable for virtualization."
]
}
|
1009.5268
|
1889103747
|
Support Vector Machines (SVMs) are popular tools for data mining tasks such as classification, regression, and density estimation. However, the original SVM (C-SVM) only considers local information of data points on or over the margin. Therefore, C-SVM loses robustness. To solve this problem, one approach is to translate (i.e., to move without rotation or change of shape) the hyperplane according to the distribution of the entire data. But existing work can only be applied to the 1-D case. In this paper, we propose a simple and efficient method called General Scaled SVM (GS-SVM) to extend the existing approach to the multi-dimensional case. Our method translates the hyperplane according to the distribution of the data projected on the normal vector of the hyperplane. Compared with C-SVM, GS-SVM has better performance on several data sets.
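To illustrate the translation idea in this abstract, the sketch below recomputes the bias of an already-trained linear separator from the one-dimensional distributions of both classes projected onto its normal vector. The Fisher-like weighting is an assumption for illustration and is not claimed to be the exact GS-SVM rule:

```python
import numpy as np

def rebias_hyperplane(w, X_pos, X_neg):
    """Recompute the bias of a hyperplane w.x + b = 0 from the class
    distributions projected onto the normal vector w."""
    p = X_pos @ w                     # projections of the positive class
    n = X_neg @ w                     # projections of the negative class
    # Assumed Fisher-like rule: put the boundary between the projected
    # class means, weighted by the projected standard deviations.
    return -(p.mean() * n.std() + n.mean() * p.std()) / (p.std() + n.std())
```

With w taken from any trained linear SVM (for instance, the coef_ attribute of scikit-learn's LinearSVC), the returned value would replace the learned intercept, translating the hyperplane without changing its orientation.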
|
There have been many works that aim at incorporating global information into C-SVM. Huang proposed a new large margin classifier called the Maxi-Min Margin Machine ( @math ), which uses the covariance information of the two classes @cite_9 . Yeung first used clustering algorithms to determine the structure of the data, then incorporated this structural information into constraints to calculate the largest margin @cite_6 . In contrast to integrating global information into the constraints, Xue @cite_2 proposed the Structural Support Vector Machine, which embeds global information into the C-SVM objective function. This approach greatly reduces the computational complexity while keeping the sparsity merit of C-SVM. Xiong and Cherkassky proposed SVM LDA, which combines LDA and SVM @cite_3 . The SVM part reflects the local information of the data while the LDA part reflects the global information. Takuya and Shigeo improved the generalization ability of C-SVM by optimizing the bias term based on Bayesian theory @cite_7 .
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"2333550441",
"2165155277",
"1485231155",
"2000762656",
"1566973065"
],
"abstract": [
"In this paper, to resolve unclassifiable regions in the support vector machines, we propose fuzzy support vector machines (FSVMs). Using the decision function obtained by training the SVM, for each class, we define a truncated polyhedral pyramidal membership function. Since, for the data in the classifiable regions, the classification results are the same, the generalization ability of the FSVM is the same as or better than that of the SVM. To further improve the generalization ability, we introduce the Bayes theory, assuming that the class distributions are normal, to optimize the bias term of the optimal hyperplane. We evaluate our methods for four benchmark data sets and demonstrate the superiority of the FSVM and Bayes FSVM over the SVM.",
"A new large margin classifier, named Maxi-Min Margin Machine (M4) is proposed in this paper. This new classifier is constructed based on both a \"local: and a \"global\" view of data, while the most popular large margin classifier, Support Vector Machine (SVM) and the recently-proposed important model, Minimax Probability Machine (MPM) consider data only either locally or globally. This new model is theoretically important in the sense that SVM and MPM can both be considered as its special case. Furthermore, the optimization of M4 can be cast as a sequential conic programming problem, which can be solved efficiently. We describe the M4 model definition, provide a clear geometrical interpretation, present theoretical justifications, propose efficient solving methods, and perform a series of evaluations on both synthetic data sets and real world benchmark data sets. Its comparison with SVM and MPM also demonstrates the advantages of our new model.",
"This paper describes a new large margin classifier, named SVM LDA. This classifier can be viewed as an extension of support vector machine (SVM) by incorporating some global information about the data. The SVM LDA classifier can be also seen as a generalization of linear discriminant analysis (LDA) by incorporating the idea of (local) margin maximization into standard LDA formulation. We show that existing SVM software can be used to solve the SVM LDA formulation. We also present empirical comparisons of the proposed algorithm with SVM and LDA using both synthetic and real world benchmark data.",
"This paper proposes a new large margin classifier--the structured large margin machine (SLMM)--that is sensitive to the structure of the data distribution. The SLMM approach incorporates the merits of \"structured\" learning models, such as radial basis function networks and Gaussian mixture models, with the advantages of \"unstructured\" large margin learning schemes, such as support vector machines and maxi-min margin machines. We derive the SLMM model from the concepts of \"structured degree\" and \"homospace\", based on an analysis of existing structured and unstructured learning models. Then, by using Ward's agglomerative hierarchical clustering on input data (or data mappings in the kernel space) to extract the underlying data structure, we formulate SLMM training as a sequential second order cone programming. Many promising features of the SLMM approach are illustrated, including its accuracy, scalability, extensibility, and noise tolerance. We also demonstrate the theoretical importance of the SLMM model by showing that it generalizes existing approaches, such as SVMs and M4s, provides novel insight into learning models, and lays a foundation for conceiving other \"structured\" classifiers.",
"Support Vector Machine (SVM) is one of the most popular classifiers in pattern recognition, which aims to find a hyperplane that can separate two classes of samples with the maximal margin. As a result, traditional SVM usually more focuses on the scatter between classes, but neglects the different data distributions within classes which are also vital for an optimal classifier in different real-world problems. Recently, using as much structure information hidden in a given dataset as possible to help improve the generalization ability of a classifier has yielded a class of effective large margin classifiers, typically as Structured Large Margin Machine (SLMM). SLMM is generally derived by optimizing a corresponding objective function using SOCP, and thus in contrast to SVM developed from optimizing a QP problem, it, though more effective in classification performance, has the following shortcomings: 1) large time complexity; 2) lack of sparsity of solution, and 3) poor scalability to the size of the dataset. In this paper, still following the above line of the research, we develop a novel algorithm, termed as Structural Support Vector Machine (SSVM), by directly embedding the structural information into the SVM objective function rather than using as the constraints into SLMM, in this way, we achieve: 1) to overcome the above three shortcomings; 2) empirically better than or comparable generalization to SLMM, and 3) theoretically and empirically better generalization than SVM."
]
}
|
1009.4773
|
1657265342
|
This paper introduces a random multiple access method for satellite communications, named Network Coding-based Slotted Aloha (NCSA). The goal is to improve the diversity of data bursts on a slotted-ALOHA-like channel by means of error correcting codes and Physical-layer Network Coding (PNC). This scheme can be considered a generalization of Contention Resolution Diversity Slotted Aloha (CRDSA), where the different replicas of that system are replaced by the different parts of a single word of an error correcting code. The performance of this scheme is first studied through a density evolution approach. Then, simulations confirm the CRDSA results by showing that, for a time frame of @math slots, the achievable total throughput is greater than @math , where @math is the maximal throughput achieved by a centralized scheme. This paper is a first analysis of the proposed scheme and opens several perspectives. The most promising one is to integrate collided bursts into the decoding process in order to improve the obtained performance.
|
We have modeled the decoding process using density evolution methods, classically applied in the context of LDPC decoding @cite_7 . Indeed, the data recovery process can be considered a message-passing algorithm in which messages are exchanged between the user nodes and the slot nodes. The theoretical results are validated by simulations. We show (under our hypotheses) that this system can reach a throughput greater than @math .
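For intuition, a toy version of such a density-evolution recursion is sketched below for a CRDSA/IRSA-style scheme with a regular repetition degree; the update equations are the standard tree-based simplification used in analyses of slotted ALOHA with successive interference cancellation (Poisson slot degrees, infinite-frame assumption), not the exact model of this paper:

```python
import numpy as np

def packet_loss_rate(G, d=3, n_iter=200):
    """Toy density evolution: regular repetition degree d, load G
    (users per slot), Poisson slot degrees, cycle-free assumption."""
    x = 1.0   # P(an edge from a slot node carries an unresolved burst)
    for _ in range(n_iter):
        q = x ** (d - 1)               # user -> slot: all other replicas unknown
        x = 1.0 - np.exp(-G * d * q)   # slot -> user: another burst still collides
    return x ** d                      # burst lost iff all d replicas unresolved

# Illustrative sweep over the channel load.
for G in (0.4, 0.6, 0.8):
    plr = packet_loss_rate(G)
    print(f"G={G:.1f}  PLR~{plr:.3g}  throughput~{G * (1.0 - plr):.3f}")
```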
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2140817291"
],
"abstract": [
"LDPC codes are one of the hottest topics in coding theory today. Originally invented in the early 1960’s, they have experienced an amazing comeback in the last few years. Unlike many other classes of codes, LDPC codes are already equipped with very fast (probabilistic) encoding and decoding algorithms. The question is that of the design of the codes such that these algorithms can recover the original codeword in the face of large amounts of noise. New analytic and combinatorial tools make it possible to solve the design problem. This makes LDPC codes not only attractive from a theoretical point of view, but also perfect for practical applications. In this note I will give a brief overview of the origins of LDPC codes and the methods used for their analysis and design."
]
}
|
1009.4823
|
2139482324
|
We propose a mid-level image segmentation framework that combines multiple figure-ground (FG) hypotheses constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3d scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State-of-the-art results are reported on both the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was achieved.
|
Approaches to image segmentation include normalized cuts @cite_2 , mean shift @cite_6 and minimum spanning trees @cite_8 . They are usually computed multiple times, to increase the probability that some of the retrieved segments capture full objects, or at least their significant parts. Another methodology to obtain multiple segmentations is to aggregate in a hierarchy, two well-known examples being multigrid methods @cite_19 and Ultrametric Contour Maps @cite_9 . The latter achieved state-of-the-art results on a number of challenging segmentation datasets. These algorithms partition the image into a number of regions by using pairwise pixel dependencies. Direct learning is usually targeted at finding the parameters of local affinities @cite_9 @cite_12 . Other techniques work at coarser scales by optimizing over superpixels. This allows features to be computed over a larger spatial support. Ren and Malik @cite_1 learn a classification model to combine superpixels based on their Gestalt properties. @cite_16 proposed a model that reasons jointly over scene geometry and occlusion boundaries, progressively merging superpixels so as to maximize the likelihood of a qualitative 3d scene interpretation. Instead, our goal is complementary: a set of consistent full-image segmentation hypotheses, computed based on mid-level Gestalt cues and implicit 3d constraints.
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"1999478155",
"2116046277",
"2104125540",
"2067191022",
"2058871925",
"2121947440",
"2125310925",
"1517004310"
],
"abstract": [
"This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions.",
"We propose a generic grouping algorithm that constructs a hierarchy of regions from the output of any contour detector. Our method consists of two steps, an oriented watershed transform (OWT) to form initial regions from contours, followed by construction of an ultra-metric contour map (UCM) defining a hierarchical segmentation. We provide extensive experimental evaluation to demonstrate that, when coupled to a high-performance contour detector, the OWT-UCM algorithm produces state-of-the-art image segmentations. These hierarchical segmentations can optionally be further refined by user-specified annotations.",
"We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is over-segmented into super-pixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images.",
"A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators; of location is also established. Algorithms for two low-level vision tasks discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"Humans usually can effortlessly find coherent regions even in noisy visual images, a task that is crucial for object recognition. Computer algorithms have been less successful at doing this in natural viewing conditions, in part because early work on the problem used only local computations on the image. Now a new approach has been developed, based on an image segmentation strategy that analyses all salient regions of an image and builds them into a hierarchical structure. This method is faster and more accurate than previous approaches, but the resulting algorithm is relatively simple to use. It is demonstrated in action by using it to find items within a large database of objects that match a target item.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"Humans have an amazing ability to instantly grasp the overall 3D structure of a scene--ground orientation, relative positions of major landmarks, etc.--even from a single image. This ability is completely missing in most popular recognition algorithms, which pretend that the world is flat and or view it through a patch-sized peephole. Yet it seems very likely that having a grasp of this \"surface layout\" of a scene should be of great assistance for many tasks, including recognition, navigation, and novel view synthesis. In this paper, we take the first step towards constructing the surface layout, a labeling of the image intogeometric classes. Our main insight is to learn appearance-based models of these geometric classes, which coarsely describe the 3D scene orientation of each image region. Our multiple segmentation framework provides robust spatial support, allowing a wide variety of cues (e.g., color, texture, and perspective) to contribute to the confidence in each geometric label. In experiments on a large set of outdoor images, we evaluate the impact of the individual cues and design choices in our algorithm. We further demonstrate the applicability of our method to indoor images, describe potential applications, and discuss extensions to a more complete notion of surface layout.",
"We present a general graph learning algorithm for spectral graph partitioning, that allows direct supervised learning of graph structures using hand labeled training examples. The learning algorithm is based on gradient descent in the space of all feasible graph weights. Computation of the gradient involves finding the derivatives of eigenvectors with respect to the graph weight matrix. We show the derivatives of eigenvectors exist and can be computed in an exact analytical form using the theory of implicit functions. Furthermore, we show for a simple case, the gradient converges exponentially fast. In the image segmentation domain, we demonstrate how to encode top-down high level object prior in a bottom-up shape detection process."
]
}
|
1009.4823
|
2139482324
|
We propose a mid-level image segmentation framework that combines multiple figure-ground hypotheses (FG), constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3D scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State-of-the-art results are reported on both the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was achieved.
|
While multi-part image segmentation algorithms are most commonly used, a number of figure-ground methods have been recently pursued. @cite_20 proposed an algorithm that generates figure-ground segmentations by maximizing a self-similarity criterion around a user-selected image point. Malisiewicz and Efros @cite_4 showed that good object-level segments could be obtained by merging pairs and triplets of segments from multi-part segmentations, but at the expense of also generating a large quantity of implausible ones. Carreira and Sminchisescu @cite_17 generate a compact set of segments using parametric minimum cuts and learn to score them using region and Gestalt-based features. These algorithms were shown to be quite successful in extracting full object segments, suggesting that a promising research direction is to develop methods that combine multiple figure-ground segmentations (or just segments obtained at multiple scales, potentially from different methods) into plausible full image segmentations. Still missing is a formal multiple-hypothesis computational framework for consistent selection (tiling) and learning, which we pursue here. Providing a compact set of multiple hypotheses rather than a single answer is desirable for learning, for high-level, informed processing and for graceful performance degradation.
|
{
"cite_N": [
"@cite_4",
"@cite_20",
"@cite_17"
],
"mid": [
"2009685382",
"2146531254",
"2017691720"
],
"abstract": [
"Sliding window scanning is the dominant paradigm in object recognition research today. But while much success has been reported in detecting several rectangular-shaped object classes (i.e. faces, cars, pedestrians), results have been much less impressive for more general types of objects. Several researchers have advocated the use of image segmentation as a way to get a better spatial support for objects. In this paper, our aim is to address this issue by studying the following two questions: 1) how important is good spatial support for recognition? 2) can segmentation provide better spatial support for objects? To answer the first, we compare recognition performance using ground-truth segmentation vs. bounding boxes. To answer the second, we use the multiple segmentation approach to evaluate how close can real segments approach the ground-truth for real objects, and at what cost. Our results demonstrate the importance of finding the right spatial support for objects, and the feasibility of doing so without excessive computational burden.",
"There is a huge diversity of definitions of \"visually meaningful\" image segments, ranging from simple uniformly colored segments, textured segments, through symmetric patterns, and up to complex semantically meaningful objects. This diversity has led to a wide range of different approaches for image segmentation. In this paper we present a single unified framework for addressing this problem --- \"Segmentation by Composition\". We define a good image segment as one which can be easily composed using its own pieces, but is difficult to compose using pieces from other parts of the image. This non-parametric approach captures a large diversity of segment types, yet requires no pre-definition or modelling of segment types, nor prior training. Based on this definition, we develop a segment extraction algorithm --- i.e., given a single point-of-interest, provide the \"best\" image segment containing that point. This induces a figure-ground image segmentation, which applies to a range of different segmentation tasks: single image segmentation, simultaneous co-segmentation of several images, and class-based segmentations.",
"We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline."
]
}
|
1009.4954
|
2950016243
|
In this paper, we propose a cross-layer scheduling algorithm that achieves a throughput "epsilon-close" to the optimal throughput in multi-hop wireless networks with a tradeoff of O(1/epsilon) in delay guarantees. The algorithm aims to solve a joint congestion control, routing, and scheduling problem in a multi-hop wireless network while satisfying per-flow average end-to-end delay guarantees and minimum data rate requirements. This problem has been solved for both backlogged as well as arbitrary arrival rate systems. Moreover, we discuss the design of a class of low-complexity suboptimal algorithms, the effects of delayed feedback on the optimal algorithm, and the extensions of the proposed algorithm to different interference models with arbitrary link capacities.
|
Delay issues in single-hop wireless networks have been addressed in @cite_6 - @cite_1 . In particular, the scheduling algorithm in @cite_11 provides a throughput-utility that is inversely proportional to the delay guarantee. The authors of @cite_4 obtained delay bounds for two classes of scheduling policies. A random access algorithm is proposed in @cite_28 for lattice and torus interference graphs, which is shown to achieve order-optimal delay in a distributed manner with optimal throughput. But these works are not readily extendable to multi-hop wireless networks, where additional arrivals from neighboring nodes and routing must be considered. Delay analysis for multi-hop networks with fixed routing is provided in @cite_30 . Delay-related scheduling algorithms for multi-hop wireless networks have been proposed in @cite_0 @cite_32 @cite_8 @cite_16 @cite_23 . However, none of the above-mentioned works provide explicit end-to-end delay guarantees.
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_16",
"@cite_11"
],
"mid": [
"2166280723",
"2144862578",
"2949108255",
"2949886575",
"2171028981",
"2170136598",
"",
"2082701892",
"2076426337",
"",
""
],
"abstract": [
"We analyze the delay performance of a multi-hop wireless network with a fixed route between each source-destination pair. There are arbitrary interference constraints on the set of links that can be served simultaneously at any given time. These interference constraints impose a fundamental lower bound on the delay performance of any scheduling policy for the system. We present a methodology to derive such lower bounds. For the tandem queue network, where the delay optimal policy is known, the expected delay of the optimal policy numerically coincides with the lower bound. We conduct extensive numerical studies to suggest that the average delay of the back-pressure scheduling policy can be made close to the lower bound by using appropriate functions of queue length.",
"We consider the question of obtaining tight delay guarantees for throughout-optimal link scheduling in arbitrary topology wireless ad-hoc networks. We consider two classes of scheduling policies: 1) a maximum queue-length weighted independent set scheduling policy, and 2) a randomized independent set scheduling policy where the independent set scheduling probabilities are selected optimally. Both policies stabilize all queues for any set of feasible packet arrival rates, and are therefore throughput-optimal. For these policies and i.i.d. packet arrivals, we show that the average packet delay is bounded by a constant that depends on the chromatic number of the interference graph, and the overall load on the network. We also prove that this upper bound is asymptotically tight in the sense that there exist classes of topologies where the expected delay attained by any scheduling policy is lower bounded by the same constant. Through simulations we examine the scaling of the average packet delay with respect to the overall load on the network, and the chromatic number of the link interference graph.",
"The back-pressure algorithm is a well-known throughput-optimal algorithm. However, its delay performance may be quite poor even when the traffic load is not close to network capacity due to the following two reasons. First, each node has to maintain a separate queue for each commodity in the network, and only one queue is served at a time. Second, the back-pressure routing algorithm may route some packets along very long routes. In this paper, we present solutions to address both of the above issues, and hence, improve the delay performance of the back-pressure algorithm. One of the suggested solutions also decreases the complexity of the queueing data structures to be maintained at each node.",
"In this paper, we consider CSMA policies for scheduling of multihop wireless networks with one-hop traffic. The main contribution of this paper is to propose Unlocking CSMA (U-CSMA) policy that enables to obtain high throughput with low (average) packet delay for large wireless networks. In particular, the delay under U-CSMA policy becomes order-optimal. For one-hop traffic, delay is defined to be order-optimal if it is O(1), i.e., it stays bounded, as the network-size increases to infinity. Using mean field theory techniques, we analytically show that for torus (grid-like) interference topologies with one-hop traffic, to achieve a network load of @math , the delay under U-CSMA policy becomes @math as the network-size increases, and hence, delay becomes order optimal. We conduct simulations for general random geometric interference topologies under U-CSMA policy combined with congestion control to maximize a network-wide utility. These simulations confirm that order optimality holds, and that we can use U-CSMA policy jointly with congestion control to operate close to the optimal utility with a low packet delay in arbitrarily large random geometric topologies. To the best of our knowledge, it is for the first time that a simple distributed scheduling policy is proposed that in addition to throughput utility-optimality exhibits delay order-optimality.",
"We investigate the problem of designing delay-aware joint flow control, routing, and scheduling algorithms in general multi-hop networks for maximizing network utilization. Since the end-to-end delay performance has a complex dependence on the high-order statistics of cross-layer algorithms, earlier optimization-based design methodologies that optimize the long term network utilization are not immediately well-suited for delay-aware design. This motivates us in this work to develop a novel design framework and alternative methods that take advantage of several unexploited design choices in the routing and the scheduling strategy spaces. In particular, we reveal and exploit a crucial characteristic of back pressure-type controllers that enables us to develop a novel link rate allocation strategy that not only optimizes long-term network utilization, but also yields loop free multi-path routes between each source-destination pair. Moreover, we propose a regulated scheduling strategy, based on a token-based service discipline, for shaping the per-hop delay distribution to obtain highly desirable end-to-end delay performance. We establish that our joint flow control, routing, and scheduling algorithm achieves loop-free routes and optimal network utilization. Our extensive numerical studies support our theoretical results, and further show that our joint design leads to substantial end-to-end delay performance improvements in multi-hop networks compared to earlier solutions.",
"We study a mobile wireless network where groups or clusters of nodes are intermittently connected via mobile carriers'' (the carriers provide connectivity over time among different clusters of nodes). Over such networks (an instantiation of a delay tolerant network), it is well-known that traditional routing algorithms perform very poorly. In this paper, we propose a two-level Back- Pressure with Source-Routing algorithm (BP+SR) for such networks. The proposed BP+SR algorithm separates routing and scheduling within clusters (fast time-scale) from the communications that occur across clusters (slow time-scale), without loss in network throughput (i.e., BP+SR is throughput-optimal). More importantly, for a source and destination node that lie in different clusters, the traditional back-pressure algorithm results in large queue lengths at each node along its path. This is because the queue dynamics are driven by the slowest time-scale (i.e., that of the carrier nodes) along the path between the source and destination, which results in very large end-to-end delays. On the other-hand, we show that the two-level BP+SR algorithm maintains large queues only at a very few nodes, and thus results in order-wise smaller end-to-end delays. We provide analytical as well as simulation results to confirm our claims.",
"",
"While there has been much progress in designing backpressure based stabilizing algorithms for multihop wireless networks, end-to-end performance (e.g., end-to-end buffer usage) results have not been as forthcoming. In this paper, we study the end-to-end buffer usage (sum of buffer utilization along a flow path) over a network with general topology and with fixed, loop-free routes using a large-deviations approach. We first derive bounds on the best performance that any scheduling algorithm can achieve. Based on the intuition from the bounds, we propose a class of (backpressure-like) scheduling algorithms called αβ-algorithms. We show that the parameters α and β can be chosen such that the system under the αβ-algorithm performs arbitrarily closely to the best possible scheduler (formally the decay rate function for end-to-end buffer overflow is shown to be arbitrarily close to optimal in the large-buffer regime). We also develop variants which have the same asymptotic optimality property, and also provide good performance in the small-buffer regime. Our results are substantiated using both analysis and simulation.",
"In a wireless network, a sophisticated algorithm is required to schedule simultaneous wireless transmissions while satisfying interference constraint that two neighboring nodes can not transmit simultaneously. The scheduling algorithm need to be excellent in performance while being simple and distributed so as to be implementable. The result of Tassiulas and Ephremides (1992) imply that the algorithm, scheduling transmissions of nodes in the 'maximum weight independent set' (MWIS) of network graph, is throughput optimal. However, algorithmically the problem of finding MWIS is known to be NP-hard and hard to approximate. This raises the following questions: is it even possible to obtain throughput optimal simple, distributed scheduling algorithm? if yes, is it possible to minimize delay of such an algorithm? Motivated by these questions, we first provide a distributed throughput optimal algorithm for any network topology. However, this algorithm may induce exponentially large delay. To overcome this, we present an order optimal delay algorithm for any non-expanding network topology. Networks deployed in geographic area, like wireless networks, are likely to be of this type. Our algorithm is based on a novel distributed graph partitioning scheme which may be of interest in its own right. Our algorithm for non-expanding graph takes O (n) total message exchanges or O(l) message exchanges per node to compute a schedule.",
"",
""
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
Z-MAC @cite_4 is a hybrid protocol that combines TDMA with CSMA in wireless multi-hop networks. Z-MAC assigns each station a slot, but other stations can borrow the slot, with contention, if its owner has no data to send; the collision-free MAC proposed in @cite_7 has lower communication complexity. Both MACs share the drawback that extra information-exchange beacons are required, which introduce additional system complexity, including neighbour discovery, local frame exchange and global time synchronisation.
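To make the slot-borrowing rule concrete, here is a minimal Python sketch of a single Z-MAC slot. It is our illustration, not code from @cite_4 ; the contention-window size and station names are assumptions.

```python
import random

def zmac_slot_winner(slot_owner, has_data, cw_nonowner=8, rng=random):
    """Toy model of one Z-MAC slot: the owner transmits first if it has
    data; otherwise non-owners contend with a random backoff inside the
    slot, where the smallest draw wins and ties collide."""
    if has_data.get(slot_owner):
        return slot_owner  # owner priority: no contention needed
    contenders = [s for s, d in has_data.items() if d and s != slot_owner]
    if not contenders:
        return None  # slot stays idle
    draws = {s: rng.randrange(cw_nonowner) for s in contenders}
    best = min(draws.values())
    winners = [s for s, b in draws.items() if b == best]
    return winners[0] if len(winners) == 1 else "collision"

# Owner "A" is idle, so "B" and "C" contend for A's slot.
print(zmac_slot_winner("A", {"A": False, "B": True, "C": True}))
```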
|
{
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"2112030268",
"1480580613"
],
"abstract": [
"This paper presents the design, implementation and performance evaluation of a hybrid MAC protocol, called Z-MAC, for wireless sensor networks that combines the strengths of TDMA and CSMA while offsetting their weaknesses. Like CSMA, Z-MAC achieves high channel utilization and low latency under low contention and like TDMA, achieves high channel utilization under high contention and reduces collision among two-hop neighbors at a low cost. A distinctive feature of Z-MAC is that its performance is robust to synchronization errors, slot assignment failures, and time-varying channel conditions; in the worst case, its performance always falls back to that of CSMA. Z-MAC is implemented in TinyOS.",
"A MAC protocol specifies how nodes in a sensor network access a shared communication channel. Desired properties of such MAC protocol are: it should be distributed and contention-free (avoid collisions); it should self-stabilize to changes in the network (such as arrival of new nodes), and these changes should be contained, i.e., affect only the nodes in the vicinity of the change; it should not assume that nodes have a global time reference, i.e., nodes may not be time-synchronized. We give the first MAC protocols that satisfy all of these requirements, i.e., we give distributed, contention-free, self-stabilizing MAC protocols which do not assume a global time reference. Our protocols self-stabilize from an arbitrary initial state, and if the network changes the changes are contained and the protocol adjusts to the local topology of the network. The communication complexity, number and size of messages, for the protocol to stabilize is small (logarithmic in network size)."
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
A collision-free MAC is introduced in @cite_10 for wireless mesh backbones. It guarantees priority access for real-time traffic, but it is restricted to a fixed wireless network and requires extra control overhead for every transmission. Ordered CSMA @cite_16 uses a centralised controller to allocate packet transmission slots, ensuring that each station transmits immediately after the data frame transmission of the previous station. Its drawback is the coordination overhead associated with the centralised controller.
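A toy timing model shows why the round-robin rule is efficient: skipping an idle station costs only a short sensing gap rather than a full maximum-propagation-delay wait. The durations below are illustrative assumptions, not values from @cite_16 .

```python
def cycle_time(order, frame_time, sense_gap, has_data):
    """Toy timing of one Ordered CSMA round: stations transmit in a fixed
    order, each starting right after sensing the end of the previous
    frame; a station with no data costs only one carrier-sense gap."""
    return sum(frame_time if has_data.get(s) else sense_gap for s in order)

# Four stations, two with data: two frames plus two sensing gaps.
print(cycle_time(["A", "B", "C", "D"], frame_time=1.0, sense_gap=0.05,
                 has_data={"A": True, "C": True}))  # -> 2.1
```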
|
{
"cite_N": [
"@cite_16",
"@cite_10"
],
"mid": [
"2107116407",
"2068118042"
],
"abstract": [
"Since underwater acoustic (UWA) networks have the nature of long propagation delay, low bit rates and error-prone acoustic communication, protocols designed for underwater acoustic networks are significantly different from that of terrestrial radio networks. Limited by these nature of UWA channels, conventional medium access control (MAC) protocols of radio packet network ether have low efficiency or are not able to apply to underwater acoustic networks. It is necessary to develop an efficient MAC protocol for underwater acoustic networks. In this paper, a collision-free MAC protocol for UWA networks called Ordered Carrier Sense Multiple Access (Ordered CSMA) is proposed and analyzed. Ordered CSMA combines the concepts of round-robin scheduling and CSMA. In Ordered CSMA, each station transmits data frame in a fixed order. More specifically, each station transmits immediately after the data frame transmission of last station in the order, instead of waiting for a period of maximum propagation delay. To achieve this, each station is constantly sensing the carrier and listens to all received frames. Due to the characteristics of collision free and high channel utilization, Ordered CSMA shows a great MAC efficiency improvement in our simulations, compared to previous works.",
"In this paper, a novel collision-free MAC scheme supporting multimedia applications is proposed for wireless mesh backbone. The proposed scheme is distributed, simple, and scalable. Benefiting from the fixed locations of wireless routers, the proposed MAC scheme reduces the control overhead greatly as compared with the conventional contention-based MAC protocols (e.g., IEEE 802.11). In addition, the proposed scheme can provide guaranteed priority access to real-time traffic and, at the same time, ensure fair channel access from the routers with data traffic. Unlike most of the existing works which focus on single-hop transmissions, the proposed MAC scheme takes the intra-flow correlations between up-stream and down-stream hops of a multi-hop flow into consideration. To avoid buffer overflow at bottleneck routers, a simple but effective congestion control mechanism is proposed. Simulation results demonstrate that the proposed scheme significantly improves the delay performance of real-time traffic, the fairness of data traffic, and the end-to-end data throughput, as compared with IEEE 802.11."
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
Recently, @cite_3 proposed Learning-BEB, based on a modification of the conventional 802.11 DCF. In a decentralised fashion, it ultimately achieves collision-free TDMA-like operation for all stations. The basic principle of its operation is that, similarly to the 802.11 DCF, stations use a backoff counter and transmit after observing that number of idle slots. However, in Learning-BEB all stations choose a fixed, rather than random, value for the backoff counter after a successful transmission. After a colliding transmission, they choose the backoff counter uniformly at random, as in the DCF. We can think of this as each station randomly choosing a slot in a schedule, until they all choose distinct slots. Arriving at this collision-free schedule can take a substantial period of time. In particular, when the number of slots in a schedule is close to the number of stations, it will take an extremely long time to converge to a collision-free state. The authors of @cite_19 propose a scheme, SRB, that is similar in spirit to L-BEB.
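The slot dynamics, and the slow convergence when the schedule is barely longer than the number of stations, can be reproduced with a small Monte Carlo sketch. This is our own simplification of L-BEB, ignoring frame timing and assuming perfect carrier sensing.

```python
import random

def rounds_to_collision_free(n_stations, n_slots, max_rounds=100_000, rng=random):
    """L-BEB-style dynamics: a station keeps its slot after a success and
    re-draws uniformly at random after a collision. Returns the number of
    schedule rounds until every station holds a distinct slot."""
    slots = [rng.randrange(n_slots) for _ in range(n_stations)]
    for rounds in range(max_rounds):
        counts = [0] * n_slots
        for s in slots:
            counts[s] += 1
        if all(counts[s] == 1 for s in slots):
            return rounds
        # Collided stations re-draw; successful stations persist.
        slots = [s if counts[s] == 1 else rng.randrange(n_slots) for s in slots]
    return max_rounds

for c in (10, 12, 16):
    trials = [rounds_to_collision_free(10, c) for _ in range(200)]
    print(f"10 stations, {c} slots: mean rounds = {sum(trials) / len(trials):.1f}")
```

With only as many slots as stations, the mean convergence time is dramatically larger than with spare slots, matching the behaviour described above.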
|
{
"cite_N": [
"@cite_19",
"@cite_3"
],
"mid": [
"2143747785",
"90489852"
],
"abstract": [
"This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time-slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks.",
"Abstract —Random access protocols have been the mechanismof choice for most WLANs, thanks to their simplicity anddistributed nature. Nevertheless, these advantages come at theprice of sub-optimal channel utilization because of empty slotsand collisions. In previous random access protocols, the stationstransmit on the channel without any clue of other stations’intentions to transmit. In this article we provide a framework tostudy the efficiency of channel access protocols. This frameworkis used to analyze the efficiency of the Binary Exponential Backoffmechanism and the maximum achievable efficiency that can beobtained from any completely random access protocol. Then wepropose Learning-BEB (L-BEB).L-BEB is exactly the same as legacy BEB, with one exception:L-BEB chooses a deterministic backoff value after a successfultransmission. We call this value the virtual frame size ( V ). Thissubtle modification significantly reduces the number of collisions.It can be observed that, as the system runs, the number of colli-sions is progressively reduced. Thus we conclude that the systemlearns. Further, if the number of contending stations is equal orlower than"
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
In Hashing Backoff @cite_18 , each station chooses its backoff value using asymptotically orthogonal hashing functions, with the aim of converging to a collision-free state. One structural difference from L-BEB @cite_3 is that @cite_18 introduces an algorithm to dynamically adapt the schedule length, using a technique similar to Idle Sense @cite_1 . The broad principles of these MAC protocols are similar, and relative to our improvements both suffer from slower convergence to a collision-free state and lower robustness to new entrants to the wireless network.
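The schedule-length adaptation can be sketched in a few lines. The target idle count and step size here are illustrative assumptions, not the parameters used in @cite_18 or @cite_1 .

```python
def adapt_schedule_length(current_len, observed_idle, target_idle=4,
                          step=1, min_len=2):
    """Idle Sense-flavoured rule: too few observed idle slots suggests
    congestion, so grow the schedule; too many suggests wasted capacity,
    so shrink it."""
    if observed_idle < target_idle:
        return current_len + step
    if observed_idle > target_idle:
        return max(min_len, current_len - step)
    return current_len
```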
|
{
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_3"
],
"mid": [
"1530164435",
"2104935654",
"90489852"
],
"abstract": [
"In this paper, we propose Hashing Backoff , an access method in which stations select backoff values by means of asymptotically orthogonal hashing functions, so that contending stations converge to a collision-free state. This solution is a half-way between TDMA, CDMA, and random access. Our simulations show that it presents significant improvement over Idle Sense , the access method with much better performance that the standard 802.11 DCF. The fact that the proposed method focuses on reducing collisions makes it particularly interesting for some specific applications such as sensor networks in which eliminating collisions leads to energy savings.",
"We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation.",
"Abstract —Random access protocols have been the mechanismof choice for most WLANs, thanks to their simplicity anddistributed nature. Nevertheless, these advantages come at theprice of sub-optimal channel utilization because of empty slotsand collisions. In previous random access protocols, the stationstransmit on the channel without any clue of other stations’intentions to transmit. In this article we provide a framework tostudy the efficiency of channel access protocols. This frameworkis used to analyze the efficiency of the Binary Exponential Backoffmechanism and the maximum achievable efficiency that can beobtained from any completely random access protocol. Then wepropose Learning-BEB (L-BEB).L-BEB is exactly the same as legacy BEB, with one exception:L-BEB chooses a deterministic backoff value after a successfultransmission. We call this value the virtual frame size ( V ). Thissubtle modification significantly reduces the number of collisions.It can be observed that, as the system runs, the number of colli-sions is progressively reduced. Thus we conclude that the systemlearns. Further, if the number of contending stations is equal orlower than"
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
A randomised MAC scheme for wireless mesh networks is proposed in @cite_15 that also aims to construct a collision-free schedule. The scheme allocates multiple fixed-length slots in a fixed-length schedule to satisfy station demands using one-hop message passing. If additional sensing information is available, the authors also show how to improve the convergence of the algorithm through the use of extra state information.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2127380142"
],
"abstract": [
"Aggregate traffic loads and topology in multihop wireless networks may vary slowly, permitting MAC protocols to \"learn\" how to spatially coordinate and adapt contention patterns. Such an approach could reduce contention, leading to better throughput. To that end, we propose a family of MAC scheduling algorithms and demonstrate general conditions, which, if satisfied, ensure lattice rate optimality (i.e., achieving any rate-point on a uniform discrete lattice within the throughput region). This general framework enables the design of MAC protocols that meet various objectives and conditions. In this paper, as instances of such a lattice-rate-optimal family, we propose distributed, synchronous contention-based scheduling algorithms that: 1) are lattice-rate-optimal under both the signal-to-interference-plus-noise ratio (SINR)-based and graph-based interference models; 2) do not require node location information; and 3) only require three-stage RTS CTS message exchanges for contention signaling. Thus, the protocols are amenable to simple implementation and may be robust to network dynamics such as topology and load changes. Finally, we propose a heuristic, which also belongs to the proposed lattice-rate-optimal family of protocols and achieves faster convergence, leading to a better transient throughput."
]
}
|
1009.4386
|
1586023924
|
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
|
ZC is proposed in @cite_17 . We can regard ZC as being similar to L-BEB in that on success it effectively chooses a fixed backoff. On failure, however, a station looks at the occupancy of slots in the previous schedule. The station chooses uniformly between the slot it failed on previously and the slots that were idle in the last schedule. By avoiding other busy slots, which other stations have 'reserved', ZC finds a collision-free allocation more quickly than other schemes.
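In the Monte Carlo sketch given earlier for L-BEB, ZC's rule amounts to changing only the re-draw line: a colliding station picks uniformly from its failed slot plus the slots observed idle in the previous schedule. This is again our simplification, assuming each station can observe the occupancy of every slot.

```python
import random

def rounds_to_collision_free_zc(n_stations, n_slots, max_rounds=100_000, rng=random):
    """ZC-style dynamics: successes persist; a colliding station chooses
    uniformly between the slot it failed on and the slots that were idle
    in the last schedule, avoiding slots other stations have reserved."""
    slots = [rng.randrange(n_slots) for _ in range(n_stations)]
    for rounds in range(max_rounds):
        counts = [0] * n_slots
        for s in slots:
            counts[s] += 1
        if all(counts[s] == 1 for s in slots):
            return rounds
        idle = [s for s in range(n_slots) if counts[s] == 0]
        slots = [s if counts[s] == 1 else rng.choice([s] + idle) for s in slots]
    return max_rounds
```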
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"1484545291"
],
"abstract": [
"This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load."
]
}
|
1009.3665
|
2951850921
|
Modern scientific repositories are growing rapidly in size. Scientists are increasingly interested in viewing the latest data as part of query results. Current scientific middleware cache systems, however, assume repositories are static. Thus, they cannot answer scientific queries with the latest data. The queries, instead, are routed to the repository until data at the cache is refreshed. In data-intensive scientific disciplines, such as astronomy, indiscriminate query routing or data refreshing often results in runaway network costs. This severely affects the performance and scalability of the repositories and makes poor use of the cache system. We present Delta, a dynamic data middleware cache system for rapidly-growing scientific repositories. Delta's key component is a decision framework that adaptively decouples data objects---choosing to keep some data objects at the cache, when they are heavily queried, and others at the repository, when they are heavily updated. Our algorithm profiles the incoming workload to search for optimal data decoupling that reduces network costs. It leverages formal concepts from the network flow problem, and is robust to evolving scientific workloads. We evaluate the efficacy of Delta, through a prototype implementation, by running query traces collected from a real astronomy survey.
|
@cite_30 consider a proxy cache for stock market data in which, adaptively, either updates to a data object are pushed by the server or are pulled by the client. Their method is limited primarily to single-valued objects, such as stock prices, and to point queries. The tradeoff between query shipping and update propagation is explored in online view materialization systems @cite_18 @cite_4 . The primary focus is on minimizing response time while satisfying the currency requirements of queries. In most systems an unlimited cache size is assumed. For comparison, we have developed an algorithm that uses workload heuristics similar to those of the algorithm developed in @cite_4 . Our algorithm minimizes network traffic instead of response time. Experiments show that such algorithms perform poorly on scientific workloads.
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4"
],
"mid": [
"2169576055",
"",
"2031513076"
],
"abstract": [
"An important issue in the dissemination of time-varying Web data such as sports scores and stock prices is the maintenance of temporal coherency. In the case of servers adhering to the HTTP protocol, clients need to frequently pull the data based on the dynamics of the data and a user's coherency requirements. In contrast, servers that possess push capability maintain state information pertaining to clients and push only those changes that are of interest to a user. These two canonical techniques have complementary properties with respect to the level of temporal coherency maintained, communication overheads, state space overheads, and loss of coherency due to (server) failures. In this paper, we show how to combine push and pull-based techniques to achieve the best features of both approaches. Our combined technique tailors the dissemination of data from servers to clients based on 1) the capabilities and load at servers and proxies and 2) clients' coherency requirements. Our experimental results demonstrate that such adaptive data dissemination is essential to meet diverse temporal coherency requirements, to be resilient to failures, and for the efficient and scalable utilization of server and network resources.",
"",
"Personalization, advertising, and the sheer volume of online data generate a staggering amount of dynamic Web content. In addition to Web caching, view materialization has been shown to accelerate the generation of dynamic Web content. View materialization is an attractive solution as it decouples the serving of access requests from the handling of updates. In the context of the Web, selecting which views to materialize must be decided online and needs to consider both performance and data freshness, which we refer to as the online view selection problem. In this paper, we define data freshness metrics, provide an adaptive algorithm for the online view selection problem that is based on user-specified data freshness requirements, and present experimental results. Furthermore, we examine alternative metrics for data freshness and extend our proposed algorithm to handle multiple users and alternative definitions of data freshness."
]
}
|
1009.3665
|
2951850921
|
Modern scientific repositories are growing rapidly in size. Scientists are increasingly interested in viewing the latest data as part of query results. Current scientific middleware cache systems, however, assume repositories are static. Thus, they cannot answer scientific queries with the latest data. The queries, instead, are routed to the repository until data at the cache is refreshed. In data-intensive scientific disciplines, such as astronomy, indiscriminate query routing or data refreshing often results in runaway network costs. This severely affects the performance and scalability of the repositories and makes poor use of the cache system. We present Delta, a dynamic data middleware cache system for rapidly-growing scientific repositories. Delta's key component is a decision framework that adaptively decouples data objects---choosing to keep some data objects at the cache, when they are heavily queried, and others at the repository, when they are heavily updated. Our algorithm profiles the incoming workload to search for optimal data decoupling that reduces network costs. It leverages formal concepts from the network flow problem, and is robust to evolving scientific workloads. We evaluate the efficacy of Delta, through a prototype implementation, by running query traces collected from a real astronomy survey.
|
More recent work @cite_7 has focused on minimizing network traffic. However, the problem there is confined to communicating just the current value of a single object. As a result, the proposed algorithms do not scale to scientific repositories in which objects have multiple values. Alternatively, @cite_13 @cite_8 consider a precision-based approach to reduce network costs, in which users specify precision requirements for each query instead of currency requirements or a tolerance for staleness. In scientific applications such as the SDSS, users have zero tolerance for approximate or incorrect values of real attributes, as an imprecise result directly impacts scientific accuracy.
|
{
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_8"
],
"mid": [
"2110849300",
"2130987148",
""
],
"abstract": [
"Caching approximate values instead of exact values presents an opportunity for performance gains in exchange for decreased precision. To maximize the performance improvement, cached approximations must be of appropriate precision: approximations that are too precise easily become invalid, requiring frequent refreshing, while overly imprecise approximations are likely to be useless to applications, which must then bypass the cache. We present a parameterized algorithm for adjusting the precision of cached approximations adaptively to achieve the best performance as data values, precision requirements, or workload vary. We consider interval approximations to numeric values but our ideas can be extended to other kinds of data and approximations. Our algorithm strictly generalizes previous adaptive caching algorithms for exact copies: we can set parameters to require that all approximations be exact, in which case our algorithm dynamically chooses whether or not to cache each data value. We have implemented our algorithm and tested it on synthetic and real-world data. A number of experimental results are reported, showing the effectiveness of our algorithm at maximizing performance, and also showing that in the special case of exact caching our algorithm performs as well as previous algorithms. In cases where bounded imprecision is acceptable, our algorithm easily outperforms previous algorithms for exact caching.",
"Proposes a new mechanism, divergence caching, for reducing access and communication charges in accessing online database servers. The objective is achieved by allowing tolerant read requests, namely requests that can be satisfied by out-of-date data. We propose two algorithms based on divergence caching-static and dynamic. The first is appropriate when the access pattern to an object in the database is fixed and known, and the latter is appropriate in other cases. We analyze these algorithms in the worst case and the expected case. >",
""
]
}
|
1009.3798
|
1529318459
|
Inference in probabilistic logic languages such as ProbLog, an extension of Prolog with probabilistic facts, is often based on a reduction to a propositional formula in DNF. Calculating the probability of such a formula involves the disjoint-sum-problem, which is computationally hard. In this work we introduce a new approximation method for ProbLog inference which exploits the DNF to focus sampling. While this DNF sampling technique has been applied to a variety of tasks before, to the best of our knowledge it has not been used for inference in probabilistic logic systems. The paper also presents an experimental comparison with another sampling based inference method previously introduced for ProbLog.
|
As mentioned earlier, DNF Sampling is based on the general sampling scheme introduced in @cite_14 . This scheme has been used for probability estimation in the context of probabilistic databases @cite_12 @cite_13 and probabilistic graph mining @cite_9 . In the context of statistical relational learning, the scheme has been used to estimate the number of true groundings of a clause @cite_2 . While the use of sampling in combination with a reduction to a DNF formula for probabilistic logic programs has already been proposed (but not realized) by @cite_7 , to the best of our knowledge, this paper is the first to actually use DNF Sampling for inference in a probabilistic logic programming system. The ProbLog system also includes approximate inference methods that do not use sampling, but rely on restricting the number of proofs encoded in the DNF @cite_3 .
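For the special case relevant here, a DNF whose conjuncts are sets of independent probabilistic facts (ProbLog proofs contain positive literals only), the scheme of @cite_14 admits a compact sketch. The Python code below is our illustration of the importance-sampling estimator, not the ProbLog implementation: a clause is drawn proportionally to its probability, a possible world is sampled in which that clause holds, and the over-counting of worlds covered by several clauses is corrected by weighting each sample with the reciprocal of the number of satisfied clauses.

```python
import random

def dnf_probability(clauses, prob, n_samples=100_000, rng=random):
    """Estimate P(c_1 or ... or c_k) for clauses over independent facts.
    Unbiased: the estimator is U * E[1 / #satisfied clauses], where U is
    the sum of the individual clause probabilities."""
    weights = []
    for c in clauses:
        w = 1.0
        for f in c:
            w *= prob[f]
        weights.append(w)
    total = sum(weights)  # U: over-counts worlds satisfying several clauses
    acc = 0.0
    for _ in range(n_samples):
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        world = {f: rng.random() < p for f, p in prob.items()}
        for f in clauses[i]:
            world[f] = True  # condition on clause i being true
        n_sat = sum(all(world[f] for f in c) for c in clauses)
        acc += 1.0 / n_sat  # n_sat >= 1 since clause i is satisfied
    return total * acc / n_samples

# Two proofs sharing fact 'a'; exact value is 0.4 * (1 - 0.5 * 0.7) = 0.26.
prob = {"a": 0.4, "b": 0.5, "c": 0.3}
print(dnf_probability([{"a", "b"}, {"a", "c"}], prob))
```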
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_2",
"@cite_13",
"@cite_12"
],
"mid": [
"2066720893",
"2256219492",
"2084591860",
"",
"2121075864",
"2044494469",
"2126033088"
],
"abstract": [
"",
"The aim of this paper is to generalize logic programs, for dealing with probabilistic knowledge. Using the possible-worlds approach of probabilistic logic ([Nil]), we define probabilistic logic programs so that their clauses may be true or false with some probabilities and goals may succeed or fail with probabilities too. Probabilistic logic programs may contain negation, their semantics agrees with negation as failure (unlike probabilistic logic which is based on the standard logical negation).",
"Graph data are subject to uncertainties in many applications due to incompleteness and imprecision of data. Mining uncertain graph data is semantically different from and computationally more challenging than mining exact graph data. This paper investigates the problem of mining frequent subgraph patterns from uncertain graph data. The frequent subgraph pattern mining problem is formalized by designing a new measure called expected support. An approximate mining algorithm is proposed to find an approximate set of frequent subgraph patterns by allowing an error tolerance on the expected supports of the discovered subgraph patterns. The algorithm uses an efficient approximation algorithm to determine whether a subgraph pattern can be output or not. The analytical and experimental results show that the algorithm is very efficient, accurate and scalable for large uncertain graph databases.",
"",
"Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN from scratch, or to refine an existing knowledge base. We have applied it in two real-world domains, and found that it outperforms using off-the-shelf ILP systems to learn the MLN structure, as well as pure ILP, purely probabilistic and purely knowledge-based approaches.",
"Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed approximate probabilities, or did not scale, and it was shown recently that precise query evaluation is theoretically hard. In this paper we describe a novel approach, which computes and ranks efficiently the top-k answers to a SQL query on a probabilistic database. The restriction to top-k answers is natural, since imprecisions in the data often lead to a large number of answers of low quality, and users are interested only in the answers with the highest probabilities. The idea in our algorithm is to run in parallel several Monte-Carlo simulations, one for each candidate answer, and approximate each probability only to the extent needed to compute correctly the top-k answers.",
"Data exchange between embedded systems and other small or large computing device increases. Since data in different data sources may refer to the same real world objects, data cannot simply be merged. Furthermore, in many situations, conflicts in data about the same real world objects need to be resolved without interference from a user. We report on an attempt to make a RDBMS probabilistic, i.e., data in a relation represents all possible views on the real world, in order to achieve unattended data integration. We define a probabilistic relational data model and review standard SQL query primitives in the light of probabilistic data. It appears that thinking in terms of 'possible worlds' is powerful in determining the proper semantics of these query primitives."
]
}
|
1009.3800
|
11622882
|
With the dissemination of affordable parallel and distributed hardware, parallel and distributed constraint solving has lately been the focus of some attention. To effectually apply the power of distributed computational systems, there must be an effective sharing of the work involved in the search for a solution to a Constraint Satisfaction Problem (CSP) between all the participating agents, and it must happen dynamically, since it is hard to predict the effort associated with the exploration of some part of the search space. We describe and provide an experimental assessment of an implementation of a work stealing-based approach to parallel CSP solving in a distributed setting.
|
More recent works rely on features of an underlying framework for programming parallel search. The concurrent Oz language provides the basis for the implementation described in @cite_7 , where search is encapsulated into computation spaces and a distributed implementation allows the distribution of workers. Work sharing is coordinated by a manager, which receives requests for work from the workers and then tries to find one willing to share the work it has left. Search strategies are user programmed and the work sharing strategy is implemented by the workers.
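The coordination pattern can be sketched as follows. This is our illustrative Python model under simplifying assumptions (a synchronous setting, with unexplored subtrees as the unit of work), not the Oz implementation of @cite_7 .

```python
from collections import deque

class Worker:
    def __init__(self, wid, frontier=()):
        self.wid = wid
        self.frontier = deque(frontier)  # unexplored search nodes

    def share(self):
        """Donate roughly half of the frontier, oldest nodes first, since
        nodes nearer the root tend to carry the most remaining work."""
        k = len(self.frontier) // 2
        return [self.frontier.popleft() for _ in range(k)]

class Manager:
    """An idle worker sends its request here; the manager looks for a
    worker still holding enough work to split it."""
    def __init__(self, workers):
        self.workers = workers

    def request_work(self, idle_worker):
        donors = sorted(self.workers, key=lambda w: -len(w.frontier))
        for donor in donors:
            if donor is not idle_worker and len(donor.frontier) > 1:
                idle_worker.frontier.extend(donor.share())
                return True
        return False  # nobody can share: the search is nearly exhausted

workers = [Worker(0, frontier=[f"node{i}" for i in range(8)]), Worker(1)]
Manager(workers).request_work(workers[1])
print(len(workers[0].frontier), len(workers[1].frontier))  # -> 4 4
```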
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"185340237"
],
"abstract": [
"Search in constraint programming is a time consuming task. Search can be speeded up by exploring subtrees of a search tree in parallel. This paper presents distributed search engines that achieve parallelism by distribution across networked computers. The main point of the paper is a simple design of the parallel search engine. Simplicity comes as an immediate consequence of clearly separating search, concurrency, and distribution. The obtained distributed search engines are simple yet offer substantial speedup on standard network computers."
]
}
|
1009.3800
|
11622882
|
With the dissemination of affordable parallel and distributed hardware, parallel and distributed constraint solving has lately been the focus of some attention. To effectually apply the power of distributed computational systems, there must be an effective sharing of the work involved in the search for a solution to a Constraint Satisfaction Problem (CSP) between all the participating agents, and it must happen dynamically, since it is hard to predict the effort associated with the exploration of some part of the search space. We describe and provide an experimental assessment of an implementation of a work stealing-based approach to parallel CSP solving in a distributed setting.
|
A focus of research has been on the strategies for splitting the work between workers. These strategies may be driven by the problem structure, such as the size of the domains @cite_8 , or by the past behaviour of the solver, be it related to properties of the solving process, such as the number of variables already instantiated @cite_6 , or to the progress of the search, in how it affects the prospects of finding a solution in the current subtree @cite_1 or in the subtrees left to explore @cite_4 .
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_8"
],
"mid": [
"1601504107",
"2148639734",
"2151562519",
"22531620"
],
"abstract": [
"The most popular architecture for parallel search is work stealing: threads that have run out of work (nodes to be searched) steal from threads that still have work. Work stealing not only allows for dynamic load balancing, but also determines which parts of the search tree are searched next. Thus the place from where work is stolen has a dramatic effect on the efficiency of a parallel search algorithm. This paper examines quantitatively how optimal work stealing can be performed given an estimate of the relative solution densities of the subtrees at each search tree node and relates it to the branching heuristic strength. An adaptive work stealing algorithm is presented that automatically performs different work stealing strategies based on the confidence of the branching heuristic at each node. Many parallel depth-first search patterns arise naturally from this algorithm. The algorithm produces near perfect or super linear algorithmic efficiencies on all problems tested. Real speedups using 8 threads range from 7 times to super linear.",
"A distributed concurrent search algorithm for distributed constraint satisfaction problems (DisCSPs) is presented. Concurrent search algorithms are composed of multiple search processes (SPs) that operate concurrently and scan non-intersecting parts of the global search space. Each SP is represented by a unique data structure, containing a current partial assignment (CPA), that is circulated among the different agents. Search processes are generated dynamically, started by the initializing agent, and by any number of agents during search.In the proposed, ConcDB, algorithm, all search processes perform dynamic backtracking. As a consequence of backjumping, a search space can be found unsolvable by a different search process. This enhances the efficiency of the ConcDB algorithm. Concurrent Dynamic Backtracking is an asynchronous distributed algorithm and is shown to be faster than former algorithms for solving DisCSPs. Experimental evaluation of ConcDB, on randomly generated DisCSPs demonstrates that the network load of ConcDB is similar to the network load of synchronous backtracking and is much lower than that of asynchronous backtracking. The advantage of Concurrent Search is more pronounced in the presence of imperfect communication, when messages are randomly delayed.",
"Program parallelization and distribution becomes increasingly important when new multi-core architectures and cheaper cluster technology provide ways to improve performance. Using declarative languages, such as constraint programming, can make the transition to parallelism easier for the programmer. In this paper, we address parallel and distributed search in constraint programming (CP) by proposing several load-balancing methods. We show how these methods improve the execution-time scalability of constraint programs. Scalability is the greatest challenge of parallelism and it is particularly an issue in constraint programming, where load-balancing is difficult. We address this problem by proposing CP-specific load-balancing methods and evaluating them on a cluster by using benchmark problems. Our experimental results show that the methods behave differently well depending on the type of problem and the type of search. This gives the programmer the opportunity to optimize the performance for a particular problem.",
""
]
}
|
1009.4375
|
2953128076
|
A (q,k,t)-design matrix is an m x n matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most q non-zeros, each column has at least k non-zeros and the supports of every two columns intersect in at most t rows. We prove that the rank of any (q,k,t)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least n - (qtn/2k)^2 . Using this result we derive the following applications: (1) Impossibility results for 2-query LCCs over the complex numbers: A 2-query locally correctable code (LCC) is an error correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Such codes have numerous applications and constructions (with exponential encoding length) are known over finite fields of small characteristic. We show that infinite families of such linear 2-query LCCs do not exist over the complex numbers. (2) Generalization of results in combinatorial geometry: We prove a quantitative analog of the Sylvester-Gallai theorem: Let @math be a set of points in @math such that for every @math there exist at least @math values of @math such that the line through @math contains a third point in the set. We show that the dimension of @math is at most @math . Our results generalize to the high dimensional case (replacing lines with planes, etc.) and to the case where the points are colored (as in the Motzkin-Rabin Theorem).
|
The idea to use matrix scaling to study structural properties of matrices was already present in @cite_14 . This work, which was also motivated by the problem of matrix rigidity, studies the presence of short cycles in the graphs of non-zero entries of a square matrix.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1969872431"
],
"abstract": [
"Abstract We consider the problem of the presence of short cycles in the graphs of nonzero elements of matrices which have sublinear rank and nonzero entries on the main diagonal, and analyze the connection between these properties and the rigidity of matrices. In particular, we exhibit a family of matrices which shows that sublinear rank does not imply the existence of triangles. This family can also be used to give a constructive bound of the order of k 3 2 on the Ramsey number R(3,k) , which matches the best-known bound. On the other hand, we show that sublinear rank implies the existence of 4-cycles. Finally, we prove some partial results towards establishing lower bounds on matrix rigidity and consequently on the size of logarithmic depth arithmetic circuits for computing certain explicit linear transformations."
]
}
|
1009.4375
|
2953128076
|
A (q,k,t)-design matrix is an m x n matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most q non-zeros, each column has at least k non-zeros and the supports of every two columns intersect in at most t rows. We prove that the rank of any (q,k,t)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least n - (qtn/2k)^2 . Using this result we derive the following applications: (1) Impossibility results for 2-query LCCs over the complex numbers: A 2-query locally correctable code (LCC) is an error correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Such codes have numerous applications and constructions (with exponential encoding length) are known over finite fields of small characteristic. We show that infinite families of such linear 2-query LCCs do not exist over the complex numbers. (2) Generalization of results in combinatorial geometry: We prove a quantitative analog of the Sylvester-Gallai theorem: Let @math be a set of points in @math such that for every @math there exist at least @math values of @math such that the line through @math contains a third point in the set. We show that the dimension of @math is at most @math . Our results generalize to the high dimensional case (replacing lines with planes, etc.) and to the case where the points are colored (as in the Motzkin-Rabin Theorem).
|
Another place where the support of a matrix is connected to its rank is in graph theory where we are interested in minimizing the rank of a (square, symmetric) real matrix which has the same support as the adjacency matrix of a given graph. This line of work goes back for over fifty years and has many applications in graph theory. See @cite_11 for a recent survey on this topic.
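As a small worked illustration of the minimum-rank problem (our own example, not taken from @cite_11 ): for the path P_n the minimum rank is known to be n - 1. The NumPy check below shows that the plain adjacency matrix of P_4 has rank 4, while a tridiagonal filling of the same support, with the diagonal tuned to -2cos(pi/5) so that one eigenvalue of the Toeplitz matrix vanishes, attains the minimum rank 3.

import numpy as np

# Adjacency matrix of the path P_4 (zero diagonal): this filling of the
# support happens to have full rank.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
print(np.linalg.matrix_rank(A))      # -> 4

# Same off-diagonal support, diagonal tuned to -2cos(pi/5): the eigenvalues
# are -2cos(pi/5) + 2cos(k*pi/5) for k = 1..4, so the k = 1 eigenvalue is 0
# and the rank drops to 3 = n - 1, the minimum rank of the path.
d = 2 * np.cos(np.pi / 5)            # equals the golden ratio (1 + sqrt(5))/2
B = -d * np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
print(np.linalg.matrix_rank(B))      # -> 3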
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2056415888"
],
"abstract": [
"The minimum rank of a simple graph G is defined to be the smallest possible rank over all symmetric real matrices whose ijth entry (for i≠j) is nonzero whenever i,j is an edge in G and is zero otherwise. This paper surveys the current state of knowledge on the problem of determining the minimum rank of a graph and related issues."
]
}
|
1009.2900
|
2952495389
|
Constraint Handling Rules (CHR) is a declarative committed-choice programming language with a strong relationship to linear logic. Its generalization CHR with Disjunction (CHRv) is a multi-paradigm declarative programming language that allows the embedding of horn programs. We analyse the assets and the limitations of the classical declarative semantics of CHR before we motivate and develop a linear-logic declarative semantics for CHR and CHRv. We show how to apply the linear-logic semantics to decide program properties and to prove operational equivalence of CHRv programs across the boundaries of language paradigms.
|
Common linear logic programming languages such as LO @cite_3 , Lolli @cite_9 , LinLog @cite_17 , and Lygon @cite_7 rely on generalizations of backward-chaining backtracking resolution of horn clauses.
|
{
"cite_N": [
"@cite_9",
"@cite_7",
"@cite_3",
"@cite_17"
],
"mid": [
"2106458801",
"1492862327",
"1970743187",
"2070324762"
],
"abstract": [
"The intuitionistic notion of context is refined by using a fragment of J.-Y. Girard's (Theor. Comput. Sci., vol.50, p.1-102, 1987) linear logic that includes additive and multiplicative conjunction, linear implication, universal quantification, the of course exponential, and the constants for the empty context and for the erasing contexts. It is shown that the logic has a goal-directed interpretation. It is also shown that the nondeterminism that results from the need to split contexts in order to prove a multiplicative conjunction can be handled by viewing proof search as a process that takes a context, consumes part of it, and returns the rest (to be consumed elsewhere). Examples taken from theorem proving, natural language parsing, and database programming are presented: each example requires a linear, rather than intuitionistic, notion of context to be modeled adequately. >",
"For many given systems of logic, it is possible to identify, via systematic proof-theoretic analyses, a fragment which can be used as a basis for a logic programming language. Such analyses have been applied to linear logic, a logic of resource-consumption, leading to the definition of the linear logic programming language Lygon. It appears that (the basis of) Lygon can be considered to be the largest possible first-order linear logic programming language derivable in this way. In this paper, we describe the design and application of Lygon. We give examples which illustrate the advantages of resource-oriented logic programming languages.",
"We introduce a novel concurrent logic programming language, which we call LO, based on an extension of Horn logic. This language enhances the process view of objects implementable in Horn-based concurrent logic programming languages with powerful capabilities for knowledge structuring, leading to a flexible form of variable-structure inheritance. The main novelty about LO is a new kind of OR-concurrency which is dual to the usual AND-concurrency and provides us with the notion of structured process. Such OR-concurrency can be nicely characterized with a sociological metaphor as modelling the internal distribution of tasks inside a complex organization; this complements the external cooperation among different entities accounted for by AND-concurrency .",
""
]
}
|
1009.2900
|
2952495389
|
Constraint Handling Rules (CHR) is a declarative committed-choice programming language with a strong relationship to linear logic. Its generalization CHR with Disjunction (CHRv) is a multi-paradigm declarative programming language that allows the embedding of horn programs. We analyse the assets and the limitations of the classical declarative semantics of CHR before we motivate and develop a linear-logic declarative semantics for CHR and CHRv. We show how to apply the linear-logic semantics to decide program properties and to prove operational equivalence of CHRv programs across the boundaries of language paradigms.
|
The earliest approach to defining a linear-logic semantics for a committed-choice programming language that we are aware of was proposed in @cite_16 . The corresponding language is indeed a fragment of pure CHR without multiple heads and with substantial restrictions on the use of built-in constraints.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"1534027338"
],
"abstract": [
"The paper deals with the relationship of committed-choice logic programming languages and their proof-theoretic semantics based on linear logic. Fragments of linear logic are used in order to express various aspects of guarded clause concurrent programming and behavior of the system. The outlined translation comprises structural properties of concurrent computations, providing a sound and complete model wrt. to the interleaving operational semantics based on transformation systems. In the presence of variables, just asynchronous properties are captured without resorting to special proof-generating strategies, so the model is only correct for deadlock-free programs."
]
}
|
1009.2900
|
2952495389
|
Constraint Handling Rules (CHR) is a declarative committed-choice programming language with a strong relationship to linear logic. Its generalization CHR with Disjunction (CHRv) is a multi-paradigm declarative programming language that allows the embedding of horn programs. We analyse the assets and the limitations of the classical declarative semantics of CHR before we motivate and develop a linear-logic declarative semantics for CHR and CHRv. We show how to apply the linear-logic semantics to decide program properties and to prove operational equivalence of CHRv programs across the boundaries of language paradigms.
|
The linear-logic programming language LolliMon, proposed in @cite_10 , integrates backward-chaining proof search with committed-choice forward reasoning. It is an extension of the aforementioned language Lolli: the sequent calculus underlying Lolli is extended by a set of dedicated inference rules. The corresponding connectives are syntactically detached from Lolli's own connectives, and operationally they are processed within a monad. The actual committed-choice behaviour comes from the explicit statement in the operational semantics that these inference rules are to be applied in a committed-choice manner during proof search. With respect to Lolli, committed choice thus comes at the cost of giving up the general notion of execution as proof search, although that notion is retained outside the monad.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1963779317"
],
"abstract": [
"Lolli is a logic programming language based on the asynchronous propositions of intuitionistic linear logic. It uses a backward chaining, backtracking operational semantics. In this paper we extend Lolli with the remaining connectives of intuitionistic linear logic restricted to occur inside a monad, an idea taken from the concurrent logical framework (CLF). The resulting language, called LolliMon, has a natural forward chaining, committed choice operational semantics inside the monad, while retaining Lolli's semantics outside the monad. LolliMon thereby cleanly integrates both concurrency and saturation with logic programming search. We illustrate its expressive power through several examples including an implementation of the pi-calculus, a call-by-need lambda-calculus, and several saturating algorithms presented in logical form."
]
}
|
1009.2900
|
2952495389
|
Constraint Handling Rules (CHR) is a declarative committed-choice programming language with a strong relationship to linear logic. Its generalization CHR with Disjunction (CHRv) is a multi-paradigm declarative programming language that allows the embedding of horn programs. We analyse the assets and the limitations of the classical declarative semantics of CHR before we motivate and develop a linear-logic declarative semantics for CHR and CHRv. We show how to apply the linear-logic semantics to decide program properties and to prove operational equivalence of CHRv programs across the boundaries of language paradigms.
|
More recently, a linear logic-based committed-choice programming language was proposed in @cite_4 . While the language itself corresponds to a fragment of pure CHR, the aim of the work is to define a cost semantics for algorithms that feature non-deterministic choices.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"1502855507"
],
"abstract": [
"Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language Lollimon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic."
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
The book @cite_14 is a comprehensive review of the applications of control theory to computing-system problems such as bandwidth allocation and unpredictable data traffic management. In general, control theory is applied to make computing systems adaptive, more robust, and stable. Adaptability, in particular, characterizes the response required in applications whose operating conditions change rapidly and unpredictably.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2015244008"
],
"abstract": [
"Preface. PART I: BACKGROUND. 1. Introduction and Overview. PART II: SYSTEM MODELING. 2. Model Construction. 3. Z-Transforms and Transfer Functions. 4. System Modeling with Block Diagrams. 5. First-Order Systems. 6. Higher-Order Systems. 7. State-Space Models. PART III: CONTROL ANALYSIS AND DESIGN. 8. Proportional Control. 9. PID Controllers. 10. State-Space Feedback Control. 11. Advanced Topics. Appendix A: Mathematical Notation. Appendix B: Acronyms. Appendix C: Key Results. Appendix D: Essentials of Linear Algebra. Appendix E: MATLAB Basics. References. Index."
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
The works @cite_5 @cite_7 present contributions in this context, for the regulation of the service levels of a web server. The variable to be controlled is the delay between the arrival time of a request and the time it starts being processed. The goal is to keep this delay within some desired range, which depends on the class of each request. An interesting point of these works is the distinction between the transient and steady-state performance in the presence of variable traffic. This feature motivates a feedback-control approach to many computing-system performance problems.
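The sketch below illustrates the shape of such a loop; it is a toy of our own, not the cited Apache-based design, and all names, gains and bounds are illustrative assumptions. A discrete-time PI controller samples the measured delay of a request class and adjusts the fraction of server processes granted to that class.

def make_pi_controller(kp, ki, setpoint, lo=0.05, hi=0.95):
    # PI controller: share = kp * error + ki * integral(error).
    integral = 0.0
    def step(measured_delay, dt):
        nonlocal integral
        error = measured_delay - setpoint      # positive: class is too slow
        integral += error * dt
        share = kp * error + ki * integral     # grant more processes when slow
        return min(hi, max(lo, share))         # saturate to a feasible share
    return step

controller = make_pi_controller(kp=0.5, ki=0.1, setpoint=0.2)  # 200 ms target
share = controller(measured_delay=0.35, dt=1.0)                # sampled at 1 Hz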
|
{
"cite_N": [
"@cite_5",
"@cite_7"
],
"mid": [
"2096441239",
"2144530326"
],
"abstract": [
"The paper presents the design, implementation, and evaluation of an adaptive architecture to provide relative delay guarantees for different service classes on Web servers under HTTP 1.1. The first contribution of the paper is the architecture based on a feedback control loop that enforces desired relative delays among classes via dynamic connection scheduling and process reallocation. The second contribution is our use of feedback control theory to design the feedback loop with proven performance guarantees. In contrast with ad hoc approaches that often rely on laborious tuning and design iterations, our control theory approach enables us to systematically design an adaptive Web server with established analytical methods. The design methodology includes using system identification to establish a dynamic model, and using the Root Locus method to design a feedback controller to satisfy performance specifications of a Web server. The adaptive architecture has been implemented by modifying an Apache Web server. Experimental results demonstrate that our adaptive server achieves robust relative delay guarantees even when workload varies significantly. Properties of our adaptive Web server include guaranteed stability, and satisfactory efficiency and accuracy in achieving the desired relative delay differentiation.",
"This paper presents the design and implementation of an adaptive Web server architecture to provide relative and absolute connection delay guarantees for different service classes. The first contribution of this paper is an adaptive architecture based on feedback control loops that enforce desired connection delays via dynamic connection scheduling and process reallocation. The second contribution is the use of control theoretic techniques to model and design the feedback loops with desired dynamic performance. In contrast to heuristics-based approaches that rely on laborious hand-tuning and testing iteration, the control theoretic approach enables systematic design of an adaptive Web server with established analytical methods. The adaptive architecture has been implemented by modifying an Apache server. Experimental results demonstrate that the adaptive server provides robust delay guarantees even when workload varies significantly"
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
Scheduling is certainly one of these problems where the transient-to-steady-state distinction features strongly. Indeed, many scheduling approaches solve essentially the same problem in different operating conditions. This is one of the main reasons why feedback scheduling has received much attention in recent years (see Xia and Sun @cite_18 for a concise review of the topic). As we argued already in the introduction, the standard approach in feedback scheduling consists in closing some control loop around an "existing scheduler" to adjust its parameters to the varying load conditions. This may yield performance improvements, but it falls short of fully exploiting the rich toolset of control theory.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"1489544630"
],
"abstract": [
"Despite rapid evolution, embedded computing systems increasingly feature resource constraints and workload uncertainties. To achieve much better system performance in unpredictable environments than traditional design approaches, a novel methodology, control-scheduling codesign, is emerging in the context of integrating feedback control and real-time computing. The aim of this work is to provide a better understanding of this emerging methodology and to spark new interests and developments in both the control and computer science communities. The state of the art of control-scheduling codesign is captured. Relevant research efforts in the literature are discussed under two categories, i.e., control of computing systems and codesign for control systems. Critical open research issues on integrating control and computing are also outlined."
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
For example, in @cite_8 , the controller adjusts the reservation time (i.e., the time the scheduler assigns to each task) with the purpose of keeping the system utilization below a specified upper bound. The plant is instead a switching system with two different states, according to whether or not the system can satisfy the total amount of CPU requests. Tests with a real-time Linux kernel show that the proposed adaptation mechanism is useful for improving quality-of-service measurements. Continuing in the same line of work, Palopoli and Abeni @cite_10 combine a reservation-based scheduler and a feedback-based adaptation mechanism to identify the best parameter set for a given workload. A similar approach is pursued in @cite_17 , where feedback models are integrated with optimization techniques.
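A minimal sketch of the reservation-adaptation idea follows; it is our simplification rather than the cited algorithms, and the function name, gain and bound are assumptions. Each task's reserved budget is moved toward its observed demand, and all budgets are compressed whenever the implied utilization exceeds the cap.

def adapt_reservations(budgets, demands, periods, u_max=0.9, gain=0.5):
    # Move each budget toward the task's measured demand.
    new = [b + gain * (d - b) for b, d in zip(budgets, demands)]
    util = sum(b / p for b, p in zip(new, periods))
    if util > u_max:                           # overload: compress everyone
        new = [b * (u_max / util) for b in new]
    return new

budgets = adapt_reservations([2.0, 3.0], demands=[2.5, 4.0],
                             periods=[10.0, 20.0])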
|
{
"cite_N": [
"@cite_10",
"@cite_17",
"@cite_8"
],
"mid": [
"2099503301",
"2165935513",
"2156946069"
],
"abstract": [
"A remarkable research activity has been carried out in the past few years to support real-time applications with appropriate scheduling solutions. Unfortunately, most of such techniques can be used only if real-time applications use a specialized API, and if some important information (such as the worst-case execution-time) are known a priori. In this paper, we present a novel technique, the legacy feedback scheduler (LFS), for a class of legacy applications that need the support of a real-time scheduler but are not written using a specialized API and have unknown or varying execution requirements. The approach is based on the combination of a resource reservation scheduler and a feedback-based adaptation mechanism for identifying the correct scheduling parameters.",
"In this paper, we develop an adaptive scheduling framework for changing the processor shares of tasks - a process called reweighting - on real-time multiprocessor platforms. Our particular focus is adaptive frameworks that are deployed in environments in which tasks may frequently require significant share changes. Prior work on enabling real-time adaptivity on multiprocessors has focused exclusively on scheduling algorithms that can enact needed adaptations. The algorithm proposed in this paper uses both feedback and optimization techniques to determine at runtime which adaptations are needed.",
"When executing soft real-time tasks in a shared processor, it is important to properly allocate the computational resources such that the quality of service requirements of each task are satisfied. In this paper we propose Adaptive Reservations, based on applying a feedback scheme to a reservation based scheduler After providing a precise mathematical model of the scheduler, we describe how this model can be used for synthesising the controller by applying results from control theory. Finally, we show the effectiveness of our method by simulation and by experiments with an MPEG player running on a modified Linux kernel."
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
In @cite_15 , the controller adjusts the reservation time to within an upper bound given by the most frequently activated task. The plant model is a continuous-time system whose variables record the queuing times of tasks. The effectiveness of the proposed method is validated through simulations.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2161706017"
],
"abstract": [
"This paper presents an approach to adaptive CPU scheduling for dynamic real-time systems using control-theoretic methods. Mathematical models are developed that form the basis for the design of controllers that optimize a certain performance measure. Simulation results are presented to show the efficacy of our approach."
]
}
|
1009.3455
|
1652577720
|
This paper presents a novel methodology to develop scheduling algorithms. The scheduling problem is phrased as a control problem, and control-theoretical techniques are used to design a scheduling algorithm that meets specific requirements. Unlike most approaches to feedback scheduling, where a controller integrates a "basic" scheduling algorithm and dynamically tunes its parameters and hence its performances, our methodology essentially reduces the design of a scheduling algorithm to the synthesis of a controller that closes the feedback loop. This approach allows the re-use of control-theoretical techniques to design efficient scheduling algorithms; it frames and solves the scheduling problem in a general setting; and it can naturally tackle certain peculiar requirements such as robustness and dynamic performance tuning. A few experiments demonstrate the feasibility of the approach on a real-time benchmark.
|
The work in @cite_12 considers some basic scheduling policies (both open-loop and closed-loop) and designs a controller that prevents system overloading. This goal is achieved by dropping some tasks from the queue when the system workload is too high.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2128838486"
],
"abstract": [
"This paper presents a feedback control real-time scheduling (FCS) framework for adaptive real-time systems. An advantage of the FCS framework is its use of feedback control theory (rather than ad hoc solutions) as a scientific underpinning. We apply a control theory based methodology to systematically design FCS algorithms to satisfy the transient and steady state performance specifications of real-time systems. In particular, we establish dynamic models of real-time systems and develop performance analyses of FCS algorithms, which are major challenges and key steps for the design of control theory based adaptive real-time systems. We also present a FCS architecture that allows plug-ins of different real-time scheduling policies and QoS optimization algorithms. Based on our framework, we identify different categories of real-time applications where different FCS algorithms should be applied. Performance evaluation results demonstrate that our analytically tuned FCS algorithms provide robust transient and steady state performance guarantees for periodic and aperiodic tasks even when the task execution times vary by as much as 100 from the initial estimate."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
Since the seminal paper by Bianchi @cite_0 , the throughput analysis of IEEE 802.11 DCF has come under much scrutiny. In @cite_0 , the author evaluates the aggregate system throughput as a function of the number of nodes under saturation, i.e., when each user has a packet to transmit at all times. The main feature of the analysis is the 2-dimensional Markov model, which captures the back-off phenomenon of IEEE 802.11, given a transmission attempt rate for each node. Due to the robustness and simplicity of the model, it has been used extensively by various researchers. In @cite_9 , the authors give an analytical model for the throughput analysis of DCF using the average back-off state, in contrast to the Markovian model proposed by Bianchi. Although the approaches are different, the end numerical results are close to each other. In @cite_1 , the authors study the fixed-point solution and performance measures in a more generalized framework.
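For concreteness, the saturated fixed point at the heart of this line of work can be computed by simple iteration, using the standard coupled equations of Bianchi's model: tau is the per-slot attempt probability implied by binary exponential backoff with minimum window W and m backoff stages, and p = 1 - (1 - tau)^(n-1) is the conditional collision probability. The damping factor below is our own addition for robust convergence.

def bianchi_fixed_point(n, W=32, m=5, iters=10000, tol=1e-12):
    p = 0.1                                    # initial guess in (0, 0.5)
    for _ in range(iters):
        tau = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                 + p * W * (1 - (2 * p) ** m))
        p_new = 1 - (1 - tau) ** (n - 1)
        if abs(p_new - p) < tol:
            break
        p = 0.5 * (p + p_new)                  # damped update
    return tau, p

tau, p = bianchi_fixed_point(n=10)             # attempt rate, collision prob.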
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_1"
],
"mid": [
"2162598825",
"1490241023",
"2166657206"
],
"abstract": [
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"The IEEE 802.11 MAC protocol provides shared access to a wireless channel. This paper uses an analytic model to study the channel capacity - i.e., maximum throughput - when using the basic access (two-way handshaking) method in this protocol. It provides closed-form approximations for the probability of collision p, the maximum throughput S and the limit on the number of stations in a wireless cell. The analysis also shows that: p does not depend on the packet length, the latency in crossing the MAC and physical layers, the acknowledgment timeout, the interframe spaces and the slot size; p and S (and other performance measures) depend on the minimum window size W and the number of stations n only through a gap g D W=.n 1 - consequently, halving W is like doubling n ;t he maximum contention window size has minimal effect onp and S; the choice of W that maximizes S is proportional to the square root of the packet length; S is maximum when transmission rate (including collisions) equals the reciprocal of transmission time, and this happens when channel wastage due to collisions balances idle bandwidth caused by backoffs. The results suggest guidelines on when and howW can be adjusted to suit measured traffic, thus making the protocol adaptive.",
"We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow will be bounded by the one with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
Delay analysis of IEEE 802.11 DCF is limited in comparison to the throughput studies. In @cite_5 , the authors present a delay analysis of an HOL packet for the saturated scenario. In @cite_4 and @cite_7 , the authors extend the model of Tay and Chua @cite_9 by proposing G/G/1 queues for each individual user; however, the analysis ignores the random delays due to packet transmissions by other users. The authors arrive at an expression for the unsaturated collision probability using a fixed-point analysis and use it in their subsequent modelling.
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_4",
"@cite_7"
],
"mid": [
"2170093429",
"1490241023",
"2101280440",
"2163707289"
],
"abstract": [
"This paper presents an analytical model to compute the average service time and jitter experienced by a packet when transmitted in a saturated IEEE 802.11 ad hoc network. In contrast to traditional work in the literature, in which a distribution is usually fitted or assumed, we use a bottom-up approach and build the first two moments of the service time based on the IEEE 802.11 binary exponential backoff algorithm and the events underneath its operation. Our model is general enough to be applied to any type of IEEE 802.11 wireless ad hoc network where the channel state probabilities driving a node's backoff operation are known. We apply our model to saturated single-hop ad hoc networks under ideal channel conditions. We validate our model through extensive simulations and conduct a performance evaluation of a node's average service time and jitter for both direct sequence and frequency-hopping spread spectrum physical layers.",
"The IEEE 802.11 MAC protocol provides shared access to a wireless channel. This paper uses an analytic model to study the channel capacity - i.e., maximum throughput - when using the basic access (two-way handshaking) method in this protocol. It provides closed-form approximations for the probability of collision p, the maximum throughput S and the limit on the number of stations in a wireless cell. The analysis also shows that: p does not depend on the packet length, the latency in crossing the MAC and physical layers, the acknowledgment timeout, the interframe spaces and the slot size; p and S (and other performance measures) depend on the minimum window size W and the number of stations n only through a gap g D W=.n 1 - consequently, halving W is like doubling n ;t he maximum contention window size has minimal effect onp and S; the choice of W that maximizes S is proportional to the square root of the packet length; S is maximum when transmission rate (including collisions) equals the reciprocal of transmission time, and this happens when channel wastage due to collisions balances idle bandwidth caused by backoffs. The results suggest guidelines on when and howW can be adjusted to suit measured traffic, thus making the protocol adaptive.",
"We present an analytic model for evaluating the queueing delays at nodes in an IEEE 802.11 MAC based wireless network. The model can account for arbitrary arrival patterns, packet size distributions and number of nodes. Our model gives closed form expressions for obtaining the delay and queue length characteristics. We model each node as a discrete time G G 1 queue and derive the service time distribution while accounting for a number of factors including the channel access delay due to the shared medium, impact of packet collisions, the resulting backoffs as well as the packet size distribution. The model is also extended for ongoing proposals under consideration for 802.11e wherein a number of packets may be transmitted in a burst once the channel is accessed. Our analytical results are verified through extensive simulations. The results of our model can also be used for providing probabilistic quality of service guarantees and determining the number of nodes that can be accommodated while satisfying a given delay constraint.",
"This paper presents an analytic model for evaluating the MAC layer queueing delays at wireless nodes using the distributed coordination function of IEEE 802.11 MAC specifications. Our model is valid for finite loads and can account for arbitrary arrival patterns, packet size distributions and number of nodes. Each node is modeled as a discrete time G G 1 queue and we obtain closed form expressions for the delay and queue length characteristics at each node. We derive the service time distribution for the packets at each node while accounting for a number of factors including the channel access delay due to the shared medium, impact of packet collisions, the resulting backoffs as well as the packet size distribution. Our analytical results are verified through extensive simulations and are more accurate than existing models."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
In @cite_2 , the authors propose System Centric and User Centric Queuing Models for IEEE 802.11 based Wireless LANs, assuming that the server allocates its resources to users in a round-robin manner. In the System Centric Model, the arrivals are assumed to be Poisson, so the resource-sharing model takes the form of an M/G/1-PS system whose mean delay is the same as that of an equivalent M/M/1 system. In the User Centric Model, each user queue is modeled as a separate G/G/1 queue.
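The equivalence used by the System Centric Model is the classical insensitivity property of processor sharing: for Poisson arrivals of rate lambda and mean service requirement E[S] = 1/mu, the mean sojourn time of an M/G/1-PS queue depends on the service distribution only through its mean, and therefore coincides with the M/M/1 value:

\[
E[T]_{\mathrm{M/G/1-PS}} = \frac{E[S]}{1-\rho} = \frac{1}{\mu-\lambda} = E[T]_{\mathrm{M/M/1}},
\qquad \rho = \lambda E[S] = \lambda/\mu < 1 .
\]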
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2107403026"
],
"abstract": [
"We consider the following two views of an IEEE 802.11 based Wireless LAN: (i) as seen by the WLAN medium and (ii) as seen by a user. In the system centric view, we model the WLAN medium as a server that allocates its resources to users in a round Robin manner. This resource sharing model not only provides a simple model for the system, it also enables us to derive the channel service rate and the total delay incurred in transmitting a packet. For Poisson arrivals, the resource sharing model takes the form of an M G 1 PS system with the mean delay being the same as that in an equivalent M M 1 system. We then take a user centric view and model each user's queue as a separate G G 1 queue. We derive the probability distributions for the different delay sources, i.e., random back-off time, random number of collisions and random number of successful transmissions from other users. This user centric model can provide insights into understanding access and queuing delays in 802.11 DCF. Finally, we discuss the utility of these models for functions such as capacity analysis, admission control and QoS enforcement."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
A novel model based on diffusion approximations has been used by Bisnik and Abouzeid @cite_3 to model delay in ad-hoc networks. The authors provide scaling laws for the delay under probabilistic routing, in which routing is oblivious to the origin and nature of the packet. They consider the problem of characterizing the average delay over various network deployments; for a given deployment, however, the observed average delay may differ widely from the value calculated using the diffusion-approximation model. We are instead interested in a simple model that yields the average delay for a given mesh network, as opposed to the average over many random deployments @cite_3 .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2119153006"
],
"abstract": [
"In this paper we focus on characterizing the average end-to-end delay and maximum achievable per-node throughput in random access multihop wireless ad hoc networks with stationary nodes. We present an analytical model that takes into account the number of nodes, the random packet arrival process, the extent of locality of traffic, and the back off and collision avoidance mechanisms of random access MAC. We model random access multihop wireless networks as open G G 1 queuing networks and use the diffusion approximation to evaluate closed form expressions for the average end-to-end delay. The mean service time of nodes is derived and used to obtain the maximum achievable per-node throughput. The analytical results obtained here from the queuing network analysis are discussed with regard to similarities and differences from the well established information-theoretic results on throughput and delay scaling laws in ad hoc networks. We perform extensive simulations and verify that the analytical results closely match the results obtained from simulations."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
In @cite_6 , the authors provide an analysis of the coupled queue process by studying a lower-dimensional process and by introducing a certain conditional independence approximation. However, they provide an analytical framework to model the delay only for the case of homogeneous Poisson arrivals. In @cite_11 , we have analyzed the mean delay for single-hop wireless mesh networks under light aggregate traffic. Assuming constant throughput, we model the system as decoupled queues which receive the same share of the aggregate throughput. We derive a simple closed-form expression for an upper bound on the delay under homogeneous Poisson packet arrivals, and we also briefly describe the approach to modelling delay for non-homogeneous Poisson arrivals in the light-to-moderate load regime. Through simulations, we show that the computed and simulated mean delays are close to each other under light aggregate load; but as the load increases, interactions between the queues appear and our modelling assumption ceases to be valid.
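A minimal numeric sketch of this decoupling approximation (parameter names are ours): with aggregate MAC throughput theta shared equally among n nodes, each queue is treated as an independent M/M/1 queue with service rate theta/n, giving a closed-form mean-delay estimate for per-node Poisson arrivals of rate lam; the estimate is meaningful only while lam < theta/n and the queues interact weakly.

def decoupled_mm1_delay(lam, theta, n):
    # lam: per-node arrival rate (packets/s); theta: aggregate throughput
    # (packets/s); n: number of nodes sharing the channel.
    mu = theta / n                             # per-node service rate
    assert lam < mu, "each queue must be stable"
    return 1.0 / (mu - lam)                    # M/M/1 mean sojourn time

delay = decoupled_mm1_delay(lam=50.0, theta=1200.0, n=10)   # seconds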
|
{
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2033020629",
"2097573247"
],
"abstract": [
"Analytical models of IEEE 802.11-based WLANs are invariably based on approximations, such as the well-known mean-field approximations proposed by Bianchi for saturated nodes. In this paper, we provide a new approach for modeling the situation when the nodes are not saturated. We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the CSMA CA protocol as standardized in the IEEE 802.11 DCF. The approximation is that, when n of the M queues are non-empty, the attempt probability of the n non-empty nodes is given by the long-term attempt probability of n saturated nodes as provided by Bianchi's model. This yields a coupled queue system. When packets arrive to the M queues according to independent Poisson processes, we provide an exact model for the coupled queue system with SDAR service. The main contribution of this paper is to provide an analysis of the coupled queue process by studying a lower dimensional process and by introducing a certain conditional independence approximation. We show that the numerical results obtained from our finite buffer analysis are in excellent agreement with the corresponding results obtained from ns-2 simulations. We replace the CSMA CA protocol as implemented in the ns-2 simulator with the SDAR service model to show that the SDAR approximation provides an accurate model for the CSMA CA protocol. We also report the simulation speed-ups thus obtained by our model-based simulation.",
"In this paper, we consider the problem of modelling the average delay in an IEEE 802.11 DCF wireless mesh network with a single root node under light traffic. We derive expression for mean delay for a co-located wireless mesh network, when packet generation is homogeneous Poisson process with rate λ. We also show how our analysis can be extended for non-homogeneous Poisson packet generation. We model mean delay by decoupling queues into independent M M 1 queues. Extensive simulations are conducted to verify the analytical results."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ_i. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets across the queues is the same and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
In this paper, we model the system as a 1-limited random polling system with zero switchover time. This enables us to use the mean delay expressions from @cite_10 to analyze the delay in a single cell wireless local area network. We remark that the user traffic delay is not merely the Head-Of-Line (HOL) packet delay that has been analyzed in @cite_0 , @cite_9 and @cite_1 ; it includes the delay from the time a user packet arrives at the queue until the packet leaves the node. Thus, both the queuing delay and the HOL delay are included.
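To make the abstraction concrete, the toy simulation below (our own illustration, not the analysis of @cite_10 ) implements a symmetric 1-limited random polling system with zero switchover time: the server repeatedly picks a non-empty queue at random, serves exactly one packet from it, and moves on at no cost, while each queue sees Bernoulli (slotted) arrivals. The recorded delay runs from a packet's arrival until it leaves the node, i.e., queuing plus service.

import random
from collections import deque

def simulate(n=5, p_arrival=0.02, service=1.0, horizon=100000):
    queues = [deque() for _ in range(n)]
    t, delays = 0.0, []
    while t < horizon:
        for q in queues:                       # one arrival slot per iteration
            if random.random() < p_arrival:
                q.append(t)                    # remember the arrival instant
        busy = [q for q in queues if q]
        if busy:
            q = random.choice(busy)            # random polling order
            arrival = q.popleft()              # 1-limited: one packet per visit
            t += service                       # zero switchover between queues
            delays.append(t - arrival)         # queuing + HOL (service) delay
        else:
            t += 1.0                           # idle slot
    return sum(delays) / max(len(delays), 1)

print(simulate())                              # mean per-packet delay (slots)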
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_1",
"@cite_10"
],
"mid": [
"2162598825",
"1490241023",
"2166657206",
"2131784952"
],
"abstract": [
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"The IEEE 802.11 MAC protocol provides shared access to a wireless channel. This paper uses an analytic model to study the channel capacity - i.e., maximum throughput - when using the basic access (two-way handshaking) method in this protocol. It provides closed-form approximations for the probability of collision p, the maximum throughput S and the limit on the number of stations in a wireless cell. The analysis also shows that: p does not depend on the packet length, the latency in crossing the MAC and physical layers, the acknowledgment timeout, the interframe spaces and the slot size; p and S (and other performance measures) depend on the minimum window size W and the number of stations n only through a gap g D W=.n 1 - consequently, halving W is like doubling n ;t he maximum contention window size has minimal effect onp and S; the choice of W that maximizes S is proportional to the square root of the packet length; S is maximum when transmission rate (including collisions) equals the reciprocal of transmission time, and this happens when channel wastage due to collisions balances idle bandwidth caused by backoffs. The results suggest guidelines on when and howW can be adjusted to suit measured traffic, thus making the protocol adaptive.",
"We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and performance measures resulting from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow will be bounded by the one with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results from the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.",
"We introduce a simple approach for modeling and analyzing the random polling system with asymmetric arrival rates, service times, and switchover times. It is assumed that the customer arrival processes at all queues are correlated Levy input processes. Two classes of service disciplines, random gated and 1-limited, are considered. The random gated service discipline generalizes several known service disciplines. We obtain explicit expressions for several performance measures of the system. These performance measures include the mean and second moment of the cycle time, the queue length at the beginning of a cycle of service and the expected delay observed by a customer. For the special case of independent Poisson input processes at all queues, we also provide new proof of several well-known pseudo-conservation laws."
]
}
|
1009.3468
|
1616516000
|
In this paper, we consider the problem of modelling the average delay experienced by a packet in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be Poisson with rate parameter λ. Since the nodes are sharing a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet has been approximated by modelling the system as a 1-limited Random Polling system with zero switchover time. We show that even for non-homogeneous packet arrival processes, the mean delay of packets is the same across the queues and depends on the system utilization factor and the aggregate throughput of the MAC. Extensive simulations are conducted to verify the analytical results.
|
Our objective is to explore the use of known results for random polling systems in analyzing the mean delay experienced by a packet. We propose a random polling system framework to analyze the mean delay in a single cell wireless local area network. We obtain closed-form expressions for the mean delay by applying results from @cite_10 to our random polling system framework. We show through simulations that our random polling framework can be used to estimate the mean delay in a single cell IEEE 802.11 wireless local area network over the entire capacity region.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2131784952"
],
"abstract": [
"We introduce a simple approach for modeling and analyzing the random polling system with asymmetric arrival rates, service times, and switchover times. It is assumed that the customer arrival processes at all queues are correlated Levy input processes. Two classes of service disciplines, random gated and 1-limited, are considered. The random gated service discipline generalizes several known service disciplines. We obtain explicit expressions for several performance measures of the system. These performance measures include the mean and second moment of the cycle time, the queue length at the beginning of a cycle of service and the expected delay observed by a customer. For the special case of independent Poisson input processes at all queues, we also provide new proof of several well-known pseudo-conservation laws."
]
}
|
1009.3088
|
1588943683
|
Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to optimally and automatically partition an application so that it migrates, executes in the cloud, and re-integrates computation in a fine-grained manner that makes efficient use of resources. Our evaluation shows that CloneCloud can achieve up to 21.2x speedup of smartphone applications we tested and it allows different partitioning for different inputs and networks.
|
Remote execution of resource-intensive applications for resource-poor hardware is a well-known approach in mobile/pervasive computing. All prior remote execution work carefully designs and pre-partitions applications between local and remote execution. Typical remote execution systems run simple visual and audio output routines at the mobile device and computation-intensive jobs at a remote server @cite_6 @cite_16 @cite_34 @cite_14 @cite_3 @cite_36 . The work in @cite_16 and that of Flinn and Satyanarayanan @cite_34 explore saving power via remote execution. Cyber foraging @cite_36 @cite_25 opportunistically uses surrogates (untrusted and unmanaged public machines) to improve the performance of mobile devices. For example, both data staging @cite_19 and Slingshot @cite_17 use surrogates; in particular, Slingshot creates a secondary replica of a home server at nearby surrogates. ISR @cite_27 provides the ability to suspend on one machine and resume on another by storing virtual machine (e.g., Xen) images in a distributed storage system.
|
{
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_16",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_34",
"@cite_25",
"@cite_17"
],
"mid": [
"2115457547",
"2040099976",
"2071875983",
"2144086893",
"",
"",
"2029107519",
"2145695845",
"2061232032",
"1999519575"
],
"abstract": [
"Pervasive computing creates environments saturated with computing and communication capability, yet gracefully integrated with human users. Remote execution has a natural role to play, in such environments, since it lets applications simultaneously leverage the mobility of small devices and the greater resources of large devices. In this paper, we describe Spectra, a remote execution system designed for pervasive environments. Spectra monitors resources such as battery, energy and file cache state which are especially important for mobile clients. It also dynamically balances energy use and quality goals with traditional performance concerns to decide where to locate functionality. Finally, Spectra is self-tuning-it does not require applications to explicitly specify intended resource usage. Instead, it monitors application behavior, learns functions predicting their resource usage, and uses the information to anticipate future behavior.",
"In this paper, we propose cyber foraging: a mechanism to augment the computational and storage capabilities of mobile devices. Cyber foraging uses opportunistically discovered servers in the environment to improve the performance of interactive applications and distributed file systems on mobile clients. We show how the performance of distributed file systems can be improved by staging data at these servers even though the servers are not trusted. We also show how the performance of interactive applications can be improved via remote execution. Finally, we present VERSUDS: a virtual interface to heteregeneous service discovery protocols that can be used to discover these servers.",
"We describe a new approach to power saving and battery life extension on an untethered laptop through wireless remote processing of power-costly tasks. We ran a series of experiments comparing the power consumption of processes run locally with that of the same processes run remotely. We examined the trade-off between communication power expenditures and the power cost of local processing. This paper describes our methodology and results of our experiments. We suggest ways to further improve this approach, and outline a software design to support remote process execution.",
"Remote access feels different from local access. The major issues are consistency (machines vary in GUIs, applications, and devices) and responsiveness (the user must wait for network and server delays), Protium attacks these by partitioning programs into local viewers that connect to remote services using application-specific protocols. Partitioning allows viewers to be customized to adapt to local features and limitations. Services are responsible for maintaining long-term state. Viewers manage the user interface and use state to reduce communication between viewer and service, reducing latency whenever possible. System infrastructure sits between the viewer and service, supporting replication, consistency, session management, and multiple simultaneous viewers. The prototype system includes an editor, a draw program, a PDF viewer, a map database, a music jukebox, and windowing system support. It runs on servers, workstations, PCs, and PDAs under Plan 9, Linux, and Windows; services and viewers have been written in C, Java, and Concurrent ML.",
"",
"",
"Preserving one's uniquely customized computing environment as one moves to different locations is an enduring challenge in mobile computing. We examine why this capability is valued so highly, and what makes it so difficult to achieve for personal computing applications. We describe a new mechanism called Internet Suspend Resume (ISR) that overcomes many of the limitations of previous approaches to realizing this capability. ISR enables a hands-free approach to mobile computing that appears well suited to future pervasive computing environments in which commodity hardware may be widely deployed for transient use. We show that ISR can be implemented by layering virtual machine technology on distributed file system technology. We also report on measurements from a prototype that confirm that ISR is already usable today for some common usage scenarios.",
"In this paper, we demonstrate that a collaborative relationship between the operating system and applications can be used to meet user-specified goals for battery duration. We first show how applications can dynamically modify their behavior to conserve energy. We then show how the Linux operating system can guide such adaptation to yield a battery-life of desired duration. By monitoring energy supply and demand, it is able to select the correct tradeoff between energy conservation and application quality. Our evaluation shows that this approach can meet goals that extend battery life by as much as 30 .",
"Cyber foraging is the transient and opportunistic use of compute servers bymobile devices. The short market life of such devices makes rapid modification of applications for remote execution an important problem. We describe a solution that combines a \"little language\" for cyber foraging with an adaptive runtime system. We report results from a user study showing that even novice developers are able to successfully modify large, unfamiliar applications in just a few hours. We also show that the quality of novice-modified and expert-modified applications are comparable in most cases.",
"Given a sufficiently good network connection, even a handheld computer can run extremely resource-intensive applications by executing the demanding portions on a remote server. At first glance, the increasingly ubiquitous deployment of wireless hotspots seems to offer the connectivity needed for remote execution. However, we show that the backhaul connection from the hotspot to the Internet can be a prohibitive bottleneck for interactive applications. To eliminate this bottleneck, we propose a new architecture, called Slingshot, that replicates remote application state on surrogate computers co-located with wireless access points. The first-class replica of each application executes on a remote server owned by the handheld user; this offers a safe haven for application state in the event of surrogate failure. Slingshot deploys second-class replicas on nearby surrogates to improve application response time. A proxy on the handheld broadcasts each application request to all replicas and returns the first response it receives. We have modified a speech recognizer and a remote desktop to use Slingshot. Our results show that these applications execute 2.6 times faster with Slingshot than with remote execution."
]
}
|
1009.3088
|
1588943683
|
Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to optimally and automatically partition an application so that it migrates, executes in the cloud, and re-integrates computation in a fine-grained manner that makes efficient use of resources. Our evaluation shows that CloneCloud can achieve up to 21.2x speedup of smartphone applications we tested and it allows different partitioning for different inputs and networks.
|
Finally, our work takes a step towards achieving the vision presented in an earlier workshop paper @cite_31 , where we made the case for augmented smartphone execution through clones running in the cloud. In this paper, we have presented the concrete design, implementation, and evaluation of our prototype system for such execution.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"1607437805"
],
"abstract": [
"Smartphones enable a new, rich user experience in pervasive computing, but their hardware is still very limited in terms of computation, memory, and energy reserves, thus limiting potential applications. In this paper, we propose a novel architecture that addresses these challenges via seamlessly--but partially--off-loading execution from the smartphone to a computational infrastructure hosting a cloud of smartphone clones. We outline new augmented execution opportunities for smartphones enabled by our CloneCloud architecture."
]
}
|
1009.2490
|
2761012467
|
In this work, we study position-based cryptography in the quantum setting. The aim is to use the geographical position of a party as its only credential. On the negative side, we show that if adversaries are allowed to share an arbitrarily large entangled quantum state, the task of secure position-verification is impossible. To this end, we prove the following very general result. Assume that Alice and Bob hold respectively subsystems @math and @math of a (possibly) unknown quantum state @math . Their goal is to calculate and share a new state @math , where @math is a fixed unitary operation. The question that we ask is how many rounds of mutual communication are needed. It is easy to achieve such a task using two rounds of classical communication, whereas, in general, it is impossible with no communication at all. Surprisingly, in case Alice and Bob share enough entanglement to start with and we allow an arbitrarily small failure probability,...
|
Concurrently with, and independently of, our work and the work on quantum tagging described above, the approach of using quantum techniques for secure position-verification was proposed by Malaney @cite_1 @cite_17 . However, the proposed scheme is merely claimed to be secure, and no rigorous security analysis is provided. As pointed out in @cite_19 , Malaney's schemes can also be broken by a teleportation-based attack. The authors of @cite_33 have proposed and proved secure a quantum scheme for position-verification. However, their proof implicitly assumed that the adversaries have no pre-shared entanglement; as shown in @cite_19 , their scheme also becomes insecure without this assumption.
|
{
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_33",
"@cite_17"
],
"mid": [
"1840167064",
"2124803813",
"145176944",
""
],
"abstract": [
"We define the task of quantum tagging, that is, authenticating the classical location of a classical tagging device by sending and receiving quantum signals from suitably located distant sites, in an environment controlled by an adversary whose quantum information processing and transmitting power is unbounded. We define simple security models for this task and briefly discuss alternatives. We illustrate the pitfalls of naive quantum cryptographic reasoning in this context by describing several protocols which at first sight appear unconditionally secure but which, as we show, can in fact be broken by teleportation-based attacks. We also describe some protocols which cannot be broken by these specific attacks, but do not prove they are unconditionally secure. We review the history of quantum tagging protocols, and show that protocols previously proposed by Malaney and are provably insecure.",
"The ability to unconditionally verify the location of a communication receiver would lead to a wide range of new security paradigms. However, it is known that unconditional location verification in classical communication systems is impossible. In this work we show how unconditional location verification can be achieved with the use of quantum communication channels. Our verification remains unconditional irrespective of the number of receivers, computational capacity, or any other physical resource held by an adversary. Quantum location verification represents an application of quantum entanglement that delivers a feat not possible in the classical-only channel. It gives us the ability to deliver real-time communications viable only at specified geographical coordinates.",
"We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its geographic location. We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks: Secure Positioning; and Position Based Key Exchange. We then show that these tasks are in fact universal in this setting --- we show how we can use them to realize Secure Multi-Party Computation.Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the \"right\" frameworks (there may be others) to study the foundations of position-based cryptography.",
""
]
}
|
1009.2490
|
2761012467
|
In this work, we study position-based cryptography in the quantum setting. The aim is to use the geographical position of a party as its only credential. On the negative side, we show that if adversaries are allowed to share an arbitrarily large entangled quantum state, the task of secure position-verification is impossible. To this end, we prove the following very general result. Assume that Alice and Bob hold respectively subsystems @math and @math of a (possibly) unknown quantum state @math . Their goal is to calculate and share a new state @math , where @math is a fixed unitary operation. The question that we ask is how many rounds of mutual communication are needed. It is easy to achieve such a task using two rounds of classical communication, whereas, in general, it is impossible with no communication at all. Surprisingly, in case Alice and Bob share enough entanglement to start with and we allow an arbitrarily small failure probability,...
|
In a subsequent paper @cite_4 , Lau and Lo use ideas similar to those in @cite_19 to show the insecurity of position-verification schemes of a certain (rather restricted) form, a class that includes the schemes from @cite_1 @cite_5 and @cite_33 . Furthermore, they propose a position-verification scheme that resists their attack, and they conjecture it to be secure. While these protocols might be secure if the adversaries do not pre-share entanglement, our attack shows that all of them are insecure in general.
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_1",
"@cite_19",
"@cite_5"
],
"mid": [
"2095058629",
"145176944",
"2124803813",
"1840167064",
"2097079400"
],
"abstract": [
"Recently, position-based quantum cryptography has been claimed to be unconditionally secure. On the contrary, here we show that the existing proposals for position-based quantum cryptography are, in fact, insecure if entanglement is shared among two adversaries. Specifically, we demonstrate how the adversaries can incorporate ideas of quantum teleportation and quantum secret sharing to compromise the security with certainty. The common flaw to all current protocols is that the Pauli operators always map a codeword to a codeword (up to an irrelevant overall phase). We propose a modified scheme lacking this property in which the same cheating strategy used to undermine the previous protocols can succeed with a rate of at most 85 . We prove the modified protocol is secure when the shared quantum resource between the adversaries is a two- or three-level system.",
"We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its geographic location. We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks: Secure Positioning; and Position Based Key Exchange. We then show that these tasks are in fact universal in this setting --- we show how we can use them to realize Secure Multi-Party Computation.Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the \"right\" frameworks (there may be others) to study the foundations of position-based cryptography.",
"The ability to unconditionally verify the location of a communication receiver would lead to a wide range of new security paradigms. However, it is known that unconditional location verification in classical communication systems is impossible. In this work we show how unconditional location verification can be achieved with the use of quantum communication channels. Our verification remains unconditional irrespective of the number of receivers, computational capacity, or any other physical resource held by an adversary. Quantum location verification represents an application of quantum entanglement that delivers a feat not possible in the classical-only channel. It gives us the ability to deliver real-time communications viable only at specified geographical coordinates.",
"We define the task of quantum tagging, that is, authenticating the classical location of a classical tagging device by sending and receiving quantum signals from suitably located distant sites, in an environment controlled by an adversary whose quantum information processing and transmitting power is unbounded. We define simple security models for this task and briefly discuss alternatives. We illustrate the pitfalls of naive quantum cryptographic reasoning in this context by describing several protocols which at first sight appear unconditionally secure but which, as we show, can in fact be broken by teleportation-based attacks. We also describe some protocols which cannot be broken by these specific attacks, but do not prove they are unconditionally secure. We review the history of quantum tagging protocols, and show that protocols previously proposed by Malaney and are provably insecure.",
"We consider information-theoretic key agreement between two parties sharing somewhat different versions of a secret w that has relatively little entropy. Such key agreement, also known as information reconciliation and privacy amplification over unsecured channels, was shown to be theoretically feasible by Renner and Wolf (Eurocrypt 2004), although no protocol that runs in polynomial time was described. We propose a protocol that is not only polynomial-time, but actually practical, requiring only a few seconds on consumer-grade computers. Our protocol can be seen as an interactive version of robust fuzzy extractors (, Crypto 2006). While robust fuzzy extractors, due to their noninteractive nature, require w to have entropy at least half its length, we have no such constraint. In fact, unlike in prior solutions, in our solution the entropy loss is essentially unrelated to the length or the entropy of w , and depends only on the security parameter."
]
}
|
1009.2490
|
2761012467
|
In this work, we study position-based cryptography in the quantum setting. The aim is to use the geographical position of a party as its only credential. On the negative side, we show that if adversaries are allowed to share an arbitrarily large entangled quantum state, the task of secure position-verification is impossible. To this end, we prove the following very general result. Assume that Alice and Bob hold respectively subsystems @math and @math of a (possibly) unknown quantum state @math . Their goal is to calculate and share a new state @math , where @math is a fixed unitary operation. The question that we ask is how many rounds of mutual communication are needed. It is easy to achieve such a task using two rounds of classical communication, whereas, in general, it is impossible with no communication at all. Surprisingly, in case Alice and Bob share enough entanglement to start with and we allow an arbitrarily small failure probability,...
|
The authors of @cite_31 show how to measure the distance between two parties by quantum cryptographic means so that only trusted people have access to the result. This is a different kind of problem from the one we consider, and the techniques used there are not applicable in our setting.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"2103156275"
],
"abstract": [
"We present a system to measure the distance between two parties that allows only trusted people to access the result. The security of the protocol is guaranteed by the complementarity principle in quantum mechanics. The protocol can be realized with available technology, at least as a proof of principle experiment."
]
}
|
1009.0855
|
2060732645
|
The Takagi function τ: [0,1] → [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : τ(x) = y} of the Takagi function τ(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a “generic” full Lebesgue measure set of ordinates y, the level sets are finite sets. In contrast, here it is shown for a “generic” full Lebesgue measure set of abscissas x, the level set L(τ(x)) is uncountable. An interesting singular monotone function is constructed associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2.
|
The Takagi function has self-affine properties, and there has been extensive study of various classes of self-affine functions. In particular, in the late 1980s Bertoin @cite_11 @cite_22 studied the Hausdorff dimension of level sets of certain classes of self-affine functions; however, his results do not cover the Takagi function.
|
{
"cite_N": [
"@cite_22",
"@cite_11"
],
"mid": [
"2000044321",
"2048567997"
],
"abstract": [
"We compute the Hausdorff dimension ofM x = s:X(s)=x and of for almost everyx andt in [0;1] for a class of self-affine functionsX.",
"We study the occupation measure of a class of self-affine functions in Kamae’s sense. As these functions are Jarnik functions, we give examples of Jarnik functions which are not (LT), answering negatively to a problem of Geman and Horowitz."
]
}
|
1009.0855
|
2060732645
|
The Takagi function τ: [0,1] → [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : τ(x) = y} of the Takagi function τ(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a “generic” full Lebesgue measure set of ordinates y, the level sets are finite sets. In contrast, here it is shown for a “generic” full Lebesgue measure set of abscissas x, the level set L(τ(x)) is uncountable. An interesting singular monotone function is constructed associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2.
|
In @cite_7 we further analyze the structure of global level sets @math using local level sets. We give a new proof of a theorem of Buczolich @cite_13 showing that if one draws @math uniformly from @math , then with probability one the level set @math is a finite set; we improve on it by showing that the expected number of points in such a "random" level set @math is infinite. We also complement this result by showing that the set of levels @math having a level set of positive Hausdorff dimension is "large" in the sense that it has full Hausdorff dimension @math , although it is of Lebesgue measure @math .
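As a numerical companion (a toy sketch of our own, not code from @cite_7 or @cite_13 ), one can evaluate the Takagi function through its standard series τ(x) = Σ_{n≥0} 2^{-n} dist(2^n x, Z) and count sign changes of τ(x) − y on a fine grid to get a crude lower bound on the size of a level set L(y):

```python
import numpy as np

def takagi(x, terms=30):
    """Evaluate the Takagi function via its series, truncated to `terms` terms."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(terms):
        s = (2.0 ** n) * x
        total += (2.0 ** -n) * np.abs(s - np.round(s))  # distance to nearest integer
    return total

def count_crossings(y, grid_size=1_000_000):
    """Sign changes of takagi(x) - y on a grid: a crude lower bound on |L(y)|."""
    xs = np.linspace(0.0, 1.0, grid_size)
    vals = takagi(xs) - y
    return int(np.count_nonzero(np.sign(vals[:-1]) != np.sign(vals[1:])))

if __name__ == "__main__":
    for y in (0.25, 0.5, 2.0 / 3.0):
        print(f"y = {y:.4f}: at least {count_crossings(y)} level-set points")
```

The count is only a lower bound: it misses points where the graph touches the level without crossing, and no finite grid can certify the uncountable level sets discussed above.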
|
{
"cite_N": [
"@cite_13",
"@cite_7"
],
"mid": [
"2060594575",
"2050162182"
],
"abstract": [
"One can define in a natural way irregular 1-sets on the graphs of several fractal functions, like Takagi’s function, Weierstrass-Cellerier type functions and the typical continuous function. These irregular 1-sets can be useful during the investigation of level-sets and occupation measures of these functions. For example, we see that for Takagi’s function and for certain Weierstrass-Cellerier functions the occupation measure is singular with respect to the Lebesgue measure and for almost every level the level set is finite.",
"The Takagi function : [0, 1] [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. This paper studies the level sets L(y) = x : (x) = y of the Takagi function (x). It shows that for a full Lebesgue measure set of ordinates y, these level sets are finite sets, but whose expected number of points is infinite. Complementing this, it shows that the set of ordinates y whose level set has positive Hausdorff dimension is itself a set of full Hausdorff dimension 1 (but Lebesgue measure zero). Finally it shows that the level sets have a nontrivial Hausdorff dimension spectrum. The results are obtained using a notion of \"local level set\" introduced in a previous paper, along with a singular measure parameterizing such sets."
]
}
|
1009.0855
|
2060732645
|
The Takagi function τ: [0,1] → [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : τ(x) = y} of the Takagi function τ(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a “generic” full Lebesgue measure set of ordinates y, the level sets are finite sets. In contrast, here it is shown for a “generic” full Lebesgue measure set of abscissas x, the level set L(τ(x)) is uncountable. An interesting singular monotone function is constructed associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2.
|
Subsequent to this paper, Allaart @cite_23 @cite_24 has obtained many further results on local level sets. He shows that dyadic ordinates @math have finite or countable level sets, and he determines which cardinalities can occur for finite level sets.
|
{
"cite_N": [
"@cite_24",
"@cite_23"
],
"mid": [
"1611433572",
"2119842257"
],
"abstract": [
"Let T be Takagi's continuous but nowhere-differentiable function. It is known that almost all level sets (with respect to Lebesgue measure on the range of T) are finite. We show that the most common cardinality of the level sets of T is two, and investigate in detail the set of ordinates y such that the level set at level y has precisely two elements. As a by-product, we obtain a simple iterative procedure for solving the equation T(x)=y. We show further that any positive even integer occurs as the cardinality of some level set, and investigate which cardinalities occur with positive probability if an ordinate y is chosen at random from the range of T. The key to the results is a system of set equations for the level sets, which are derived from the partial self-similarity of T. These set equations yield a system of linear relationships between the cardinalities of level sets at various levels, from which all the results of this paper flow.",
"Let T be Takagi’s continuous but nowhere-differentiable function. This paper considers the size of the level sets of T both from a probabilistic point of view and from the perspective of Baire category. We first give more elementary proofs of three recently published results. The first, due to Z. Buczolich, states that almost all level sets (with respect to Lebesgue measure on the range of T) are finite. The second, due to J. Lagarias and Z. Maddock, states that the average number of points in a level set is infinite. The third result, also due to Lagarias and Maddock, states that the average number of local level sets contained in a level set is 3 2. In the second part of the paper it is shown that, in contrast to the above results, the set of ordinates y with uncountably infinite level sets is residual, and a fairly explicit description of this set is given. The final result of the paper is an answer to a question of Lagarias and Maddock: it is shown that most level sets (in the sense of Baire category) contain infinitely many local level sets, and that a continuum of level sets even contain uncountably many local level sets."
]
}
|
1009.0855
|
2060732645
|
The Takagi function τ: [0,1] → [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : τ(x) = y} of the Takagi function τ(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a “generic” full Lebesgue measure set of ordinates y, the level sets are finite sets. In contrast, here it is shown for a “generic” full Lebesgue measure set of abscissas x, the level set L(τ(x)) is uncountable. An interesting singular monotone function is constructed associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2.
|
Finally, we remark that there has been much study of the non-differentiable nature of the Takagi function in various directions; see, for example, Allaart and Kawamura ( @cite_0 , @cite_18 ) and the references therein. It is also considered as an example in Tricot [Tri97, Section 6].
|
{
"cite_N": [
"@cite_0",
"@cite_18"
],
"mid": [
"2110785107",
"2962856088"
],
"abstract": [
"We consider the functions @math defined as the @math th partial derivative of Lebesgue's singular function @math with respect to @math at @math . This sequence includes a multiple of the Takagi function as the case @math . We show that @math is continuous but nowhere differentiable for each @math , and determine the Holder order of @math . From this, we derive that the Hausdorff dimension of the graph of @math is one. Using a formula of Lomnicki and Ulam, we obtain an arithmetic expression for @math using the binary expansion of @math , and use this to find the sets of points where @math and @math take on their absolute maximum and minimum values. We show that these sets are topological Cantor sets. In addition, we characterize the sets of local maximum and minimum points of @math and @math .",
"Abstract Let T be Takagi's continuous but nowhere-differentiable function. Using a representation in terms of Rademacher series due to N. Kono [Acta Math. Hungar. 49 (1987) 315–324], we give a complete characterization of those points where T has a left-sided, right-sided, or two-sided infinite derivative. This characterization is illustrated by several examples. A consequence of the main result is that the sets of points where T ′ ( x ) = ± ∞ have Hausdorff dimension one. As a byproduct of the method of proof, some exact results concerning the modulus of continuity of T are also obtained."
]
}
|
1009.1555
|
2154201348
|
We present a new similarity measure tailored to posts in an online forum. Our measure takes into account all the available information about user interest and interaction --- the content of posts, the threads in the forum, and the author of the posts. We use this post similarity to build a similarity between users, based on principal coordinate analysis. This allows easy visualization of the user activity as well. Similarity between users has numerous applications, such as clustering or classification. We show that including the author of a post in the post similarity has a smoothing effect on principal coordinate projections. We demonstrate our method on real data drawn from an internal corporate forum, and compare our results to those given by a standard document classification method. We conclude our method gives a more detailed picture of both the local and global network structure.
|
Several previous studies of network structure have addressed characterizing and clustering users' behaviors, personal qualities, or interests. For example, in recommendation systems, collaborative filtering works towards this goal, with high-profile applications including Netflix @cite_1 , Amazon.com @cite_9 and financial services @cite_4 . Other authors use a singular value decomposition @cite_1 @cite_2 or variants of @math -nearest neighbors @cite_1 @cite_8 to characterize user interest.
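As an illustration of the SVD-based characterization mentioned above (a toy sketch with an assumed ratings matrix, not the pipeline of any cited system):

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items (0 = unrated).
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 4],
    [1, 0, 4, 5, 3],
], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2                                    # keep the top-k latent factors
user_factors = U[:, :k] * s[:k]          # low-dimensional user profiles

# Cosine similarity between users in the latent space; high similarity is
# the basis for nearest-neighbor style recommendation.
unit = user_factors / np.linalg.norm(user_factors, axis=1, keepdims=True)
print(np.round(unit @ unit.T, 2))
```

On this toy matrix, users 0-1 and users 2-3 form two clearly separated interest groups in the rank-2 latent space.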
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_2"
],
"mid": [
"1480376833",
"1832221731",
"74742851",
"2124029832",
"2099866409"
],
"abstract": [
"",
"Abstract : We investigate the use of dimensionality reduction to improve performance for a new class of data analysis software called \"recommender systems\" Recommender systems apply knowledge discovery techniques to the problem of making product recommendations during a live customer interaction. These systems are achieving widespread success in E-commerce nowadays, especially with the advent of the Internet. The tremendous growth of customers and products poses three key challenges for recommender systems in the E-commerce domain. These are: producing high quality recommendations, performing many recommendations per second for millions of customers and products, and achieving high coverage in the face of data sparsity. One successful recommender system technology is collaborative filtering, which works by matching customer preferences to other customers in making recommendations. Collaborative filtering has been shown to produce high quality recommendations, but the performance degrades with the number of customers and products. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very largescale problems. This paper presents two different experiments where we have explored one technology called Singular Value Decomposition (SVD) to reduce the dimensionality of recommender system databases. Each experiment compares the quality of a recommender system using SVD with the quality of a recommender system using collaborative filtering. The first experiment compares the effectiveness of the two recommender systems at predicting consumer preferences based on a database of explicit ratings of products. The second experiment compares the effectiveness of the two recommender systems at producing Top-N lists based on a real-life customer purchase database from an E-Commerce site. Our experience suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.",
"\"Witkeys\" are websites in China that form a rapidly growing web-based knowledge market. A user who posts a task also offers a small fee, and many other users submit their answers to compete. The Witkey sites fall in-between aspects of the now-defunct Google Answers (vetted experts answer questions for a fee) and Yahoo Answers (anyone can answer or ask a question). As such, these sites promise new possibilities for knowledge-sharing online communities, perhaps fostering the freelance marketplace of the future. In this paper, we investigate one of the biggest Witkey websites in China, Taskcn.com. In particular, we apply social network prestige measures to a novel construction of user and task networks based on competitive outcomes to discover the underlying properties of both users and tasks. Our results demonstrate the power of this approach: Our analysis allows us to infer relative expertise of the users and provides an understanding of the participation structure in Taskcn. The results suggest challenges and opportunities for this kind of knowledge sharing medium.",
"Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals, and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact - such as cast lists o r movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users.",
"Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6 better than the score of Netflix's own system."
]
}
|
1009.0282
|
2055612548
|
This paper proposes a new notion of typical sequences on a wide class of abstract alphabets (so-called standard Borel spaces), which is based on approximations of memoryless sources by empirical distributions uniformly over a class of measurable “test functions.” In the finite-alphabet case, we can take all uniformly bounded functions and recover the usual notion of strong typicality (or typicality under the total variation distance). For a general alphabet, however, this function class turns out to be too large, and must be restricted. With this in mind, we define typicality with respect to any Glivenko-Cantelli function class (i.e., a function class that admits a Uniform Law of Large Numbers) and demonstrate its power by giving simple derivations of the fundamental limits on the achievable rates in several source coding scenarios, in which the relevant operational criteria pertain to reproducing empirical averages of a general-alphabet stationary memoryless source with respect to a suitable function class.
|
We also note that a restricted notion of typicality based on weak convergence was used by Kontoyiannis and Zamir @cite_9 in the context of universal vector quantization using entropy codes. The idea there is to consider sequences of increasing length, whose empirical distributions converge in the weak topology to the output distribution of an optimal test channel in a Shannon rate-distortion problem.
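For concreteness, the function-class notion of typicality described in the abstract can be sketched as follows (the set notation is ours), with P the source distribution, F a Glivenko-Cantelli class of test functions, and ε > 0 a tolerance:

\[
x^n \in T_{\varepsilon}^{(n)}(P;\mathcal{F})
\quad\Longleftrightarrow\quad
\sup_{f \in \mathcal{F}}
\left| \frac{1}{n}\sum_{i=1}^{n} f(x_i) - \mathbb{E}_{P}[f] \right|
\le \varepsilon .
\]

In the finite-alphabet case, taking F to be all uniformly bounded functions (or just the indicators of symbols) recovers strong typicality, while weak convergence of the kind used in @cite_9 corresponds to testing against bounded continuous functions.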
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2138605525"
],
"abstract": [
"We introduce a universal quantization scheme based on random coding, and we analyze its performance. This scheme consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy coding that is matched to the quantized codeword distribution. A single-letter formula is derived for the rate achieved by this scheme at a given distortion, in the limit of large codebook dimension. The rate reduction due to entropy coding is quantified, and it is shown that it can be arbitrarily large. In the special case of \"almost uniform\" codebooks (e.g., an independent and identically distributed (i.i.d.) Gaussian codebook with large variance) and difference distortion measures, a novel connection is drawn between the compression achieved by the present scheme and the performance of \"universal\" entropy-coded dithered lattice quantizers. This connection generalizes the \"half-a-bit\" bound on the redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong notion of universality where a single \"almost uniform\" codebook is near optimal for any source and any difference distortion measure. The proofs are based on the fact that the limiting empirical distribution of the first matching codeword in a random codebook can be precisely identified. This is done using elaborate large deviations techniques, that allow the derivation of a new \"almost sure\" version of the conditional limit theorem."
]
}
|
1009.0448
|
2951672555
|
In this paper, we consider the problem of modelling the average delay in an IEEE 802.11 DCF wireless mesh network with a single root node under light traffic. We derive an expression for the mean delay in a co-located wireless mesh network, when packet generation is a homogeneous Poisson process with rate λ. We also show how our analysis can be extended to non-homogeneous Poisson packet generation. We model mean delay by decoupling the queues into independent M/M/1 queues. Extensive simulations are conducted to verify the analytical results.
|
In @cite_3 , the authors have proposed System Centric and User Centric Queuing Models for IEEE 802.11 based Wireless LANs. @cite_3 assumes that the server allocates its resources to users in a round-robin manner. In the System Centric Model, the arrivals are assumed to be Poisson, so the resource sharing model takes the form of an M/G/1 PS system whose mean delay is the same as that of an equivalent M/M/1 system. In the User Centric Model, each user queue is modeled as a separate G/G/1 queue.
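The System Centric claim is consistent with the standard insensitivity property of processor sharing: the mean sojourn time of an M/G/1 PS queue depends on the service-time distribution only through its mean, so with Poisson arrivals of rate λ and mean service time 1/μ,

\[
\mathbb{E}[T]_{M/G/1\ PS} \;=\; \frac{1/\mu}{1-\rho} \;=\; \frac{1}{\mu-\lambda} \;=\; \mathbb{E}[T]_{M/M/1},
\qquad \rho = \frac{\lambda}{\mu} < 1 .
\]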
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2107403026"
],
"abstract": [
"We consider the following two views of an IEEE 802.11 based Wireless LAN: (i) as seen by the WLAN medium and (ii) as seen by a user. In the system centric view, we model the WLAN medium as a server that allocates its resources to users in a round Robin manner. This resource sharing model not only provides a simple model for the system, it also enables us to derive the channel service rate and the total delay incurred in transmitting a packet. For Poisson arrivals, the resource sharing model takes the form of an M G 1 PS system with the mean delay being the same as that in an equivalent M M 1 system. We then take a user centric view and model each user's queue as a separate G G 1 queue. We derive the probability distributions for the different delay sources, i.e., random back-off time, random number of collisions and random number of successful transmissions from other users. This user centric model can provide insights into understanding access and queuing delays in 802.11 DCF. Finally, we discuss the utility of these models for functions such as capacity analysis, admission control and QoS enforcement."
]
}
|
1009.0448
|
2951672555
|
In this paper, we consider the problem of modelling the average delay in an IEEE 802.11 DCF wireless mesh network with a single root node under light traffic. We derive an expression for the mean delay in a co-located wireless mesh network, when packet generation is a homogeneous Poisson process with rate λ. We also show how our analysis can be extended to non-homogeneous Poisson packet generation. We model mean delay by decoupling the queues into independent M/M/1 queues. Extensive simulations are conducted to verify the analytical results.
|
A novel model based on diffusion approximations has been used to model delay in ad hoc networks by Bisnik and Abouzeid @cite_4 . The authors provide scaling laws for delay under probabilistic routing, assuming that the ad hoc network follows a probabilistic routing methodology (i.e., the routing is oblivious to the origin and nature of the packet). The paper characterizes the average delay over various network deployments; for a given deployment, however, the observed average delay may differ widely from the value calculated using the diffusion approximation model. We are interested in a simple model that yields the average delay for a given mesh network, as opposed to the average over many random deployments @cite_4 .
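Delay models of this kind typically rest on heavy-traffic approximations for G/G/1 queues. A representative Kingman-type form (shown for illustration; it is not the specific expression derived in @cite_4 ) is

\[
\mathbb{E}[W] \;\approx\; \frac{\rho}{1-\rho}\cdot\frac{C_a^{2}+C_s^{2}}{2}\cdot\mathbb{E}[S],
\]

where ρ is the utilization, E[S] the mean service time, and C_a², C_s² the squared coefficients of variation of the interarrival and service times.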
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2119153006"
],
"abstract": [
"In this paper we focus on characterizing the average end-to-end delay and maximum achievable per-node throughput in random access multihop wireless ad hoc networks with stationary nodes. We present an analytical model that takes into account the number of nodes, the random packet arrival process, the extent of locality of traffic, and the back off and collision avoidance mechanisms of random access MAC. We model random access multihop wireless networks as open G G 1 queuing networks and use the diffusion approximation to evaluate closed form expressions for the average end-to-end delay. The mean service time of nodes is derived and used to obtain the maximum achievable per-node throughput. The analytical results obtained here from the queuing network analysis are discussed with regard to similarities and differences from the well established information-theoretic results on throughput and delay scaling laws in ad hoc networks. We perform extensive simulations and verify that the analytical results closely match the results obtained from simulations."
]
}
|
1008.3938
|
1974252294
|
We give the first combinatorial approximation algorithm for Maxcut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get a tradeoff between approximation factor and running time. We show that for any constant b > 1.5, there is an O(n^b) algorithm that outputs a (0.5+delta)-approximation for Maxcut, where delta = delta(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex @math and a conductance parameter phi, unless a random walk of length ell = O(log n) starting from i mixes rapidly (in terms of phi and ell), we can find a cut of conductance at most phi close to the vertex. The work done per vertex found in the cut is sublinear in n.
|
Trevisan @cite_12 also uses random walks to give approximation algorithms for Maxcut (as a special case of unique games), although the algorithm only deals with the case when the optimum is @math . The property tester for bipartiteness in sparse graphs by Goldreich and Ron @cite_19 is a sublinear time procedure that uses random walks to distinguish graphs where the Maxcut value is @math from those where it is @math . The algorithm, however, does not actually give an approximation to Maxcut. There is a similarity in flavor to Dinur's proof of the PCP theorem @cite_13 , which uses random walks and majority votes for gap amplification of CSPs. Our algorithm might be seen as some kind of belief propagation, where messages about labels are passed around. For the special case of cubic and maximum degree @math graphs, there has been a study of combinatorial algorithms for Maxcut @cite_3 @cite_18 @cite_22 . These are based on graph-theoretic properties and are very different from our algorithms. Combinatorial algorithms for CSP (constraint satisfaction problems) based on LP relaxations have been studied in @cite_10 .
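To illustrate the flavor of random-walk-based partitioning, here is a toy heuristic of our own devising (it is not the algorithm of this paper, nor Trevisan's): run many short random walks from a seed vertex and label each vertex by the majority parity of the steps at which the walks visit it. On a graph that is close to bipartite, parity votes tend to separate the two sides and hence produce a large cut.

```python
import random
from collections import defaultdict

def parity_vote_cut(adj, start=0, walks=2000, length=20, seed=0):
    """Toy Maxcut heuristic: majority parity of random-walk visit times.
    `adj` maps each vertex to its list of neighbors."""
    rng = random.Random(seed)
    votes = defaultdict(int)              # +1 for even-step visits, -1 for odd
    for _ in range(walks):
        v = start
        for step in range(length):
            votes[v] += 1 if step % 2 == 0 else -1
            v = rng.choice(adj[v])
    side = {u: votes[u] >= 0 for u in adj}   # unvisited vertices default to one side
    # Each edge appears twice in the adjacency lists, hence the division by 2.
    cut = sum(1 for u in adj for w in adj[u] if side[u] != side[w]) // 2
    return side, cut

if __name__ == "__main__":
    # A 6-cycle is bipartite: the optimal cut contains all 6 edges.
    cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    _, cut = parity_vote_cut(cycle)
    print(f"cut edges found: {cut} of 6")
```

On bipartite graphs the parity of a walk's step determines the side exactly, so the toy recovers the full cut; on general graphs it only conveys the intuition of aggregating random-walk information, without any of the paper's guarantees.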
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_10",
"@cite_3",
"@cite_19",
"@cite_13",
"@cite_12"
],
"mid": [
"1965444148",
"2113647243",
"2084350425",
"2109041494",
"",
"",
"340925356"
],
"abstract": [
"We present an improved semidefinite programming based approximation algorithm for the MAX CUT problem in graphs of maximum degree at most 3. The approximation ratio of the new algorithm is at least 0.9326. This improves, and also somewhat simplifies, a result of Feige, Karpinski and Langberg. We also observe that results of Hopkins and Staton and of Bondy and Locke yield a simple combinatorial 4 5-approximation algorithm for the problem. Finally, we present a combinatorial 22 27-approximation algorithm for the MAX CUT problem for regular cubic graphs.",
"The best approximation algorithm for Max Cut in graphs of maximum degree 3 uses semidefinite programming, has approximation ratio 0.9326, and its running time is @Q(n^3^.^5logn); but the best combinatorial algorithms have approximation ratio 4 5 only, achieved in O(n^2) time [J.A. Bondy, S.C. Locke, J. Graph Theory 10 (1986) 477-504; E. Halperin, et al, J. Algorithms 53 (2004) 169-185]. Here we present an improved combinatorial approximation, which is a 5 6-approximation algorithm that runs in O(n^2) time, perhaps improvable even to O(n). Our main tool is a new type of vertex decomposition for graphs of maximum degree 3.",
"We consider the problem MAX CSP over multi-valued domains with variables ranging over sets of size si ≤ s and constraints involving kj ≤ k variables. We study two algorithms with approximation ratios A and B. respectively, so we obtain a solution with approximation ratio max (A, B).The first algorithm is based on the linear programming algorithm of Serna, Trevisan, and Xhafa [Proc. 15th Annual Symp. on Theoret. Aspects of Comput. Sci., 1998, pp. 488-498] and gives ratio A which is bounded below by s1-k. For k = 2, our bound in terms of the individual set sizes is the minimum over all constraints involving two variables of (1 2√s1+ 1 2√s2)2, where s1 and s2 are the set sizes for the two variables.We then give a simple combinatorial algorithm which has approximation ratio B, with B > A e. The bound is greater than s1-k e in general, and greater than s1-k(1 - (s - 1) 2(k - 1)) for s ≤ k - 1, thus close to the s1-k linear programming bound for large k. For k = 2, the bound is 4 9 if s = 2, 1 2(s - 1) if s ≥ 3, and in general greater than the minimum of 1 4S1 + 1 4s2 over constraints with set sizes s1 and s2, thus within a factor of two of the linear programming bound.For the case of k = 2 and s = 2 we prove an integrality gap of 4 9 (1 + O(n-1 2)). This shows that our analysis is tight for any method that uses the linear programming upper bound.",
"On presente un algorithme polynomial permettant de determiner un sous-graphe biparti d'un graphe G sans triangle ni boucle de degre maximum 3, contenant au moins 4 5 des aretes de G. On caracterise le dodecaedre et le graphe de Petersen comme les seuls graphes connexes 3-reguliers sans triangle ni boucle pour lesquels il n'existe pas de sous graphe biparti ayant un nombre d'aretes superieur a cette proportion",
"",
"",
"A portable device for removing and collecting dust, particularly from vehicle friction brake and clutch assemblies, comprises a base having a high efficiency filter and vacuum assembly, a transparent evacuation hood designed to surround the brake or clutch, a vacuum hose for removing contaminated air from the hood to the base, and adjustable means for disposing the hood at variable heights above the base. The filter and vacuum assembly, which may serve as a stand alone unit having application outside the automotive field, is oriented such that a disposable filter is located in a compartment upstream of the vacuum motors and may be removed from the base while the vacuum motors are running, thereby preventing dispersal of hazardous materials during the filter changing operation. Apertures provided in the hood include two gloved portals for allowing a worker's hands and arms to have unimpeded access to the brake and clutch assemblies. A highly portable embodiment includes an inflatable hood."
]
}
|
1008.2909
|
1657232375
|
Multi-dimensional arrays are among the most fundamental and most useful data structures of all. In C++, excellent template libraries exist for arrays whose dimension is fixed at runtime. Arrays whose dimension can change at runtime have been implemented in C. However, a generic object-oriented C++ implementation of runtime-flexible arrays has so far been missing. In this article, we discuss our new implementation called Marray, a package of class templates that fills this gap. Marray is based on views as an underlying concept. This concept brings some of the flexibility known from script languages such as R and MATLAB to C++. Marray is free both for commercial and non-commercial use and is publicly available from www.andres.sc/marray
|
summarizes the mathematics of runtime-flexible multi-dimensional views and arrays. It is a concise compilation of existing ideas from excellent research articles @cite_6 @cite_17 and textbooks, e.g., @cite_9 . deals with the C++ implementation of the mathematical concepts and provides some examples that show how the classes can be used in practice. Readers who prefer a practical introduction are encouraged to read first. discusses already implemented extensions based on the C++0x standard proposal @cite_2 . Section concludes the article.
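As a taste of the mathematics summarized there, the following sketch illustrates the offset-plus-strides arithmetic that underlies runtime-flexible views. It is written in Python for self-containment; the class and method names are hypothetical stand-ins, not the Marray C++ API. The point is only that index computation is address = offset + Σ_j i_j · stride_j, so operations such as transposition touch metadata only.

```python
# Hedged sketch of strided-view arithmetic (hypothetical names, not Marray's API).

class View:
    def __init__(self, data, shape, strides, offset=0):
        self.data, self.shape, self.strides, self.offset = data, shape, strides, offset

    def __getitem__(self, index):
        # address = offset + sum_j index_j * stride_j
        assert len(index) == len(self.shape)
        flat = self.offset + sum(i * s for i, s in zip(index, self.strides))
        return self.data[flat]

    def transpose(self):
        # A transpose is free: only shape/stride metadata changes, no copy.
        return View(self.data, self.shape[::-1], self.strides[::-1], self.offset)

# A 2x3 row-major array stored in a flat buffer:
buf = [0, 1, 2, 3, 4, 5]
a = View(buf, shape=(2, 3), strides=(3, 1))
print(a[(1, 2)])              # 5
print(a.transpose()[(2, 1)])  # 5, same element through the transposed view
```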
|
{
"cite_N": [
"@cite_9",
"@cite_2",
"@cite_6",
"@cite_17"
],
"mid": [
"650984362",
"",
"2093680667",
"1581501197"
],
"abstract": [
"Advanced Data Structures presents a comprehensive look at the ideas, analysis, and implementation details of data structures as a specialized topic in applied algorithms. Data structures are how data is stored within a computer, and how one can go about searching for data within. This text examines efficient ways to search and update sets of numbers, intervals, or strings by various data structures, such as search trees, structures for sets of intervals or piece-wise constant functions, orthogonal range search structures, heaps, union-find structures, dynamization and persistence of structures, structures for strings, and hash tables. This is the first volume to show data structures as a crucial algorithmic topic, rather than relegating them as trivial material used to illustrate object-oriented programming methodology, filling a void in the ever-increasing computer science market. Numerous code examples in C and more than 500 references make Advanced Data Structures an indispensable text. topic. Numerous code examples in C and more than 500 references make Advanced Data Structures an indispensable text.",
"",
"In Cpp, multi-dimensional arrays are often used but the language provides limited native support for them. The language, in its Standard Library, supplies sophisticated interfaces for manipulating sequential data, but relies on its bare-bones C heritage for arrays. The MultiArray library, a part of the Boost library collection, enhances a Cpp programmer's tool set with versatile multi-dimensional array abstractions. It includes a general array class template and native array adaptors that support idiomatic array operations and interoperate with Cpp Standard Library containers and algorithms. The arrays share a common interface, expressed as a generic programming concept, in terms of which generic array algorithms can be implemented. We present the library design, introduce a generic interface for array programming, demonstrate how the arrays integrate with the Cpp Standard Library, and discuss the essential aspects of their implementation. Copyright © 2004 John Wiley & Sons, Ltd.",
"The Blitz++ library provides numeric arrays for C++ with efficiency that rivals Fortran, without any language extensions. Blitz++ has features unavailable in Fortran 90 95, such as arbitrary transpose operations, array renaming, tensor notation, partial reductions, multi-component arrays and stencil operators. The library handles parsing and analysis of array expressions on its own using the expression templates technique, and performs optimizations (such as loop transformations) which have until now been the responsibility of compilers."
]
}
|
1008.1827
|
1586904714
|
Motivated by the fact that in many game-theoretic settings, the game analyzed is only an approximation to the game being played, in this work we analyze equilibrium computation for the broad and natural class of bimatrix games that are stable to perturbations. We specifically focus on games with the property that small changes in the payoff matrices do not cause the Nash equilibria of the game to fluctuate wildly. For such games we show how one can compute approximate Nash equilibria more efficiently than the general result of LMM03 , by an amount that depends on the degree of stability of the game and that reduces to their bound in the worst case. Furthermore, we show that for stable games the approximate equilibria found will be close in variation distance to true equilibria, and moreover this holds even if we are given as input only a perturbation of the actual underlying stable game. For uniformly-stable games, where the equilibria fluctuate at most quasi-linearly in the extent of the perturbation, we get a particularly dramatic improvement. Here, we achieve a fully quasi-polynomial-time approximation scheme: that is, we can find @math -approximate equilibria in quasi-polynomial time. This is in marked contrast to the general class of bimatrix games for which finding such approximate equilibria is PPAD-hard. In particular, under the (widely believed) assumption that PPAD is not contained in quasi-polynomial time, our results imply that such uniformly stable games are inherently easier for computation of approximate equilibria than general bimatrix games.
|
@cite_11 analyzed the question of finding an approximate Nash equilibrium in games that satisfy stability with respect to approximation. However, their condition is quite restrictive in that it focuses only on games with the property that all Nash equilibria are close together, thus eliminating from consideration most common games. By contrast, our perturbation-stability notion, which (as mentioned above) can be shown to be a generalization of their notion, captures many more realistic situations. Our upper bounds on approximate equilibria can be viewed as generalizing the corresponding result of @cite_11 , and the analysis is significantly more challenging technically. Moreover, our lower bounds also apply to the stability notion of @cite_11 and provide the first (nontrivial) results about the interesting range of parameters for that stability notion as well.
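Since the whole discussion revolves around ε-approximate equilibria, a small self-contained checker may help fix the notion (this is the standard definition, not the computation procedure of either paper): a mixed profile is an ε-equilibrium when no player gains more than ε by any pure deviation.

```python
import numpy as np

# Hedged sketch: compute the smallest eps for which a mixed profile (x, y)
# is an eps-approximate equilibrium of the bimatrix game (R, C).

def eps_of_profile(R, C, x, y):
    """R, C: row/column payoff matrices; x, y: mixed strategies."""
    row_payoff = x @ R @ y
    col_payoff = x @ C @ y
    row_regret = np.max(R @ y) - row_payoff   # best pure deviation for row
    col_regret = np.max(x @ C) - col_payoff   # best pure deviation for column
    return max(row_regret, col_regret)

# Matching pennies: the uniform profile is an exact equilibrium (eps = 0).
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
C = -R
x = y = np.array([0.5, 0.5])
print(eps_of_profile(R, C, x, y))  # 0.0
```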
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1857802886"
],
"abstract": [
"One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or e-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all e-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We show furthermore there exist 2-player n-by-n approximationstable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (e,Δ) approximation-stable games must have an e-equilibrium of support O(Δ2-o(1) e2 log n), yielding an immediate nO(Δ2-o(1) e2 log n) -time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and e are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition."
]
}
|
1008.2267
|
2949402789
|
The ongoing debate over net neutrality covers a broad set of issues related to the regulation of public networks. In two ways, we extend an idealized usage-priced game-theoretic framework based on a common linear demand-response model. First, we study the impact of "side payments" among a plurality of Internet service (access) providers and content providers. In the non-monopolistic case, our analysis reveals an interesting "paradox" of side payments in that overall revenues are reduced for those that receive them. Second, assuming different application types (e.g., HTTP web traffic, peer-to-peer file sharing, media streaming, interactive VoIP), we extend this model to accommodate differential pricing among them in order to study the issue of application neutrality. Revenues for neutral and non-neutral pricing are compared for the case of two application types.
|
Previously, we considered certain net-neutrality-related issues, such as side payments and premium service fees, limiting our consideration to monopolistic providers @cite_14 . In the following, we extend this model to include competition between multiple identical providers (based on an idea sketched in Section IV of @cite_14 ).
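For reference, the following is a generic instance of the linear demand-response setup underlying this line of work; the symbols D_0 (maximal demand) and α (price sensitivity) are illustrative placeholders, not necessarily the exact parameterization of @cite_14 :

```latex
\[
D(p) = \bigl(D_0 - \alpha p\bigr)^{+}, \qquad
p^{\ast} = \arg\max_{p \ge 0}\; p\,D(p) = \frac{D_0}{2\alpha}, \qquad
p^{\ast} D(p^{\ast}) = \frac{D_0^{2}}{4\alpha},
\]
```

so a monopolist's usage price and revenue are pinned down by the two demand parameters; side payments and competition then perturb this baseline optimum.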
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2953343454"
],
"abstract": [
"Hahn and Wallsten wrote that network neutrality \"usually means that broadband service providers charge consumers only once for Internet access, do not favor one content provider over another, and do not charge content providers for sending information over broadband lines to end users.\" In this paper we study the implications of non-neutral behaviors under a simple model of linear demand-response to usage-based prices. We take into account advertising revenues and consider both cooperative and non-cooperative scenarios. In particular, we model the impact of side-payments between service and content providers. We also consider the effect of service discrimination by access providers, as well as an extension of our model to non-monopolistic content providers."
]
}
|
1008.2267
|
2949402789
|
The ongoing debate over net neutrality covers a broad set of issues related to the regulation of public networks. In two ways, we extend an idealized usage-priced game-theoretic framework based on a common linear demand-response model. First, we study the impact of "side payments" among a plurality of Internet service (access) providers and content providers. In the non-monopolistic case, our analysis reveals an interesting "paradox" of side payments in that overall revenues are reduced for those that receive them. Second, assuming different application types (e.g., HTTP web traffic, peer-to-peer file sharing, media streaming, interactive VoIP), we extend this model to accommodate differential pricing among them in order to study the issue of application neutrality. Revenues for neutral and non-neutral pricing are compared for the case of two application types.
|
The validity of the ISPs' argument that net neutrality is a disincentive for bandwidth expansion has been studied in @cite_8 . In the proposed framework, incentives for broadband providers to expand infrastructure capacity turned out to be higher under net neutrality, with ISPs tending to under- or over-invest in the non-neutral regime.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2040869303"
],
"abstract": [
"The status quo of prohibiting broadband service providers from charging websites for preferential access to their customers---the bedrock principle of net neutrality (NN)---is under fierce debate. We develop a game-theoretic model to address two critical issues of NN: (1) Who are gainers and losers of abandoning NN? (2) Will broadband service providers have greater incentive to expand their capacity without NN? We find that if the principle of NN is abolished, the broadband service provider stands to gain from the arrangement, as a result of extracting the preferential access fees from content providers. Content providers are thus left worse off, mirroring the stances of the two sides in the debate. Depending on parameter values in our framework, consumer surplus either does not change or is higher in the short run. When compared to the baseline case under NN, social welfare in the short run increases if one content provider pays for preferential treatment but remains unchanged if both content providers pay. Finally, we find that the incentive to expand infrastructure capacity for the broadband service provider and its optimal capacity choice under NN are higher than those under the no-net-neutrality (NNN) regime, except in some specific cases. Under NN, the broadband service provider always invests in broadband infrastructure at the socially optimal level but either under-or overinvests in infrastructure capacity in the absence of NN."
]
}
|
1008.2267
|
2949402789
|
The ongoing debate over net neutrality covers a broad set of issues related to the regulation of public networks. In two ways, we extend an idealized usage-priced game-theoretic framework based on a common linear demand-response model. First, we study the impact of "side payments" among a plurality of Internet service (access) providers and content providers. In the non-monopolistic case, our analysis reveals an interesting "paradox" of side payments in that overall revenues are reduced for those that receive them. Second, assuming different application types (e.g., HTTP web traffic, peer-to-peer file sharing, media streaming, interactive VoIP), we extend this model to accommodate differential pricing among them in order to study the issue of application neutrality. Revenues for neutral and non-neutral pricing are compared for the case of two application types.
|
@cite_12 deals with the question of side payments and deploys a framework in which CPs can subsidize consumers' connectivity costs. The authors compare an unregulated regime with a "net neutral" one in which restrictions apply to the maximum price ISPs can charge content providers. They find that, even in the neutral case, CPs can benefit from sharing revenue with end users if the latter are sufficiently price sensitive (and the cost of connectivity is low enough). Their framework is insightful but does not take CP revenues into consideration.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2138744708"
],
"abstract": [
"Pricing content-providers for connectivity to end- users and setting connection parameters based on the price is an evolving model on the Internet. The implications are heavily debated in telecom policy circles, and some advocates of \"Network Neutrality\" have opposed price based differentiation in connectivity. However, pricing content providers can possibly subsidize the end-user's cost of connectivity, and the consequent increase in end-user demand can benefit ISPs and content providers. This paper provides a framework to quantify the precise trade-off in the distribution of benefits among ISPs, content-providers, and end-users. The framework generalizes the well-known utility maximization based rate allocation model, which has been extensively studied as an interplay between the ISP and the end-users, to incorporate pricing of content-providers. We derive the resulting equilibrium prices and data rates in two different ISP market conditions: competition and monopoly. Network neutrality based restriction on content-provider pricing is then modeled as a constraint on the maximum price that can be charged to content-providers. We demonstrate that, in addition to gains in total and end- user surplus, content-provider experiences a net surplus from participation in rate allocation under low cost of connectivity. The surplus gains are, however, limited under monopoly conditions in comparison to competition in the ISP market."
]
}
|
1008.2267
|
2949402789
|
The ongoing debate over net neutrality covers a broad set of issues related to the regulation of public networks. In two ways, we extend an idealized usage-priced game-theoretic framework based on a common linear demand-response model. First, we study the impact of "side payments" among a plurality of Internet service (access) providers and content providers. In the non-monopolistic case, our analysis reveals an interesting "paradox" of side payments in that overall revenues are reduced for those that receive them. Second, assuming different application types (e.g., HTTP web traffic, peer-to-peer file sharing, media streaming, interactive VoIP), we extend this model to accommodate differential pricing among them in order to study the issue of application neutrality. Revenues for neutral and non-neutral pricing are compared for the case of two application types.
|
In @cite_10 , the authors address whether local ISPs should be allowed to charge remote CPs for the "right" to reach their end users (again, the side-payment issue). Through the study of a two-sided market, they determine when neutrality regulations are harmful, depending on the parameters characterizing advertising rates and consumer price sensitivity. As in @cite_12 , the outcome essentially depends on end users' price sensitivity, but here it is furthermore related to CP (advertising) revenues.
|
{
"cite_N": [
"@cite_10",
"@cite_12"
],
"mid": [
"2006549632",
"2138744708"
],
"abstract": [
"We address whether local ISPs should be allowed to charge content providers, who derive advertising revenue, for the right to access end-users. We compare two-sided pricing where such charges are allowed to one-sided pricing where they are prohibited. By deriving provider equilibrium actions (prices and investments), we determine which regime is welfare-superior as a function of a few key parameters. We find that two-sided pricing is more favorable when the ratio between parameters characterizing advertising rates and end-user price sensitivity is either low or high.",
"Pricing content-providers for connectivity to end- users and setting connection parameters based on the price is an evolving model on the Internet. The implications are heavily debated in telecom policy circles, and some advocates of \"Network Neutrality\" have opposed price based differentiation in connectivity. However, pricing content providers can possibly subsidize the end-user's cost of connectivity, and the consequent increase in end-user demand can benefit ISPs and content providers. This paper provides a framework to quantify the precise trade-off in the distribution of benefits among ISPs, content-providers, and end-users. The framework generalizes the well-known utility maximization based rate allocation model, which has been extensively studied as an interplay between the ISP and the end-users, to incorporate pricing of content-providers. We derive the resulting equilibrium prices and data rates in two different ISP market conditions: competition and monopoly. Network neutrality based restriction on content-provider pricing is then modeled as a constraint on the maximum price that can be charged to content-providers. We demonstrate that, in addition to gains in total and end- user surplus, content-provider experiences a net surplus from participation in rate allocation under low cost of connectivity. The surplus gains are, however, limited under monopoly conditions in comparison to competition in the ISP market."
]
}
|
1008.2626
|
1746953810
|
New applications of data mining, such as in biology, bioinformatics, or sociology, are faced with large datasets structured as graphs. We introduce a novel class of tree-shaped patterns called tree queries, and present algorithms for mining tree queries and tree-query associations in a large data graph. Novel about our class of patterns is that they can contain constants, and can contain existential nodes which are not counted when determining the number of occurrences of the pattern in the data graph. Our algorithms have a number of provable optimality properties, which are based on the theory of conjunctive database queries. We propose a practical, database-oriented implementation in SQL, and show that the approach works in practice through experiments on data about food webs, protein interactions, and citation analysis.
|
Approaches to graph mining, especially mining for frequent patterns or association rules, can be divided into two major categories, which should not be confused. In transactional graph mining, e.g., @cite_8 @cite_19 @cite_24 @cite_32 @cite_26 @cite_5 @cite_39 , the dataset consists of many small data graphs which we call transactions, and the task is to discover patterns that occur at least once in a sufficient number of transactions. (Approaches from machine learning or inductive logic programming usually call the small data graphs "examples" instead of transactions.) In single-graph mining, the dataset is a single large data graph, and the task is to discover patterns that occur sufficiently often in that one dataset; the contrast is made concrete in the sketch below.
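The distinction can be made concrete with a toy support count for the simplest possible pattern, a single labeled edge (an illustrative encoding only; real miners count embeddings of larger subgraphs, which is where the difficulty lies):

```python
# Hedged toy contrast between the two support notions discussed above.

def transactional_support(pattern, transactions):
    """Transactional setting: count transactions (small graphs) containing
    the pattern at least once; repeats within one transaction count once."""
    return sum(1 for labels, edges in transactions
               if any((labels[u], labels[v]) == pattern for u, v in edges))

def single_graph_support(pattern, labels, edges):
    """Single-graph setting: count occurrences inside one large graph."""
    return sum(1 for u, v in edges if (labels[u], labels[v]) == pattern)

p = ("A", "B")
t1 = ({0: "A", 1: "B", 2: "B"}, [(0, 1), (0, 2)])   # two occurrences, counts once
t2 = ({0: "A", 1: "C"}, [(0, 1)])                   # no occurrence
print(transactional_support(p, [t1, t2]))           # 1

big_labels = {0: "A", 1: "B", 2: "A", 3: "B"}
print(single_graph_support(p, big_labels, [(0, 1), (2, 3), (2, 1)]))  # 3
```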
|
{
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_32",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_5"
],
"mid": [
"2168209541",
"2113243831",
"1711073729",
"",
"2118349699",
"2128994830",
"2170726034"
],
"abstract": [
"Over the years, frequent itemset discovery algorithms have been used to find interesting patterns in various application areas. However, as data mining techniques are being increasingly applied to nontraditional domains, existing frequent pattern discovery approaches cannot be used. This is because the transaction framework that is assumed by these algorithms cannot be used to effectively model the data sets in these domains. An alternate way of modeling the objects in these data sets is to represent them using graphs. Within that model, one way of formulating the frequent pattern discovery problem is that of discovering subgraphs that occur frequently over the entire set of graphs. We present a computationally efficient algorithm, called FSG, for finding all frequent subgraphs in large graph data sets. We experimentally evaluate the performance of FSG using a variety of real and synthetic data sets. Our results show that despite the underlying complexity associated with frequent subgraph discovery, FSG is effective in finding all frequently occurring subgraphs in data sets containing more than 200,000 graph transactions and scales linearly with respect to the size of the data set.",
"Discovery of frequent patterns has been studied in a variety of data mining settings. In its simplest form, known from association rule mining, the task is to discover all frequent itemsets, i.e., all combinations of items that are found in a sufficient number of examples. The fundamental task of association rule and frequent set discovery has been extended in various directions, allowing more useful patterns to be discovered with special purpose algorithms. We present WARMR, a general purpose inductive logic programming algorithm that addresses frequent query discovery: a very general DATALOG formulation of the frequent pattern discovery problem. The motivation for this novel approach is twofold. First, exploratory data mining is well supported: WARMR offers the flexibility required to experiment with standard and in particular novel settings not supported by special purpose algorithms. Also, application prototypes based on WARMR can be used as benchmarks in the comparison and evaluation of new special purpose algorithms. Second, the unified representation gives insight to the blurred picture of the frequent pattern discovery domain. Within the DATALOG formulation a number of dimensions appear that relink diverged settings. We demonstrate the frequent query approach and its use on two applications, one in alarm analysis, and one in a chemical toxicology domain.",
"The derivation of frequent subgraphs from a dataset of labeled graphs has high computational complexity because the hard problems of isomorphism and subgraph isomorphism have to be solved as part of this derivation. To deal with this computational complexity, all previous approaches have focused on one particular kind of graph. In this paper, we propose an approach to conduct a complete search for various classes of frequent subgraphs in a massive dataset of labeled graphs within a practical time. The power of our approach comes from the algebraic representation of graphs, its associated operations and well-organized bias constraints to limit the search space efficiently. The performance has been evaluated using real world datasets, and the high scalability and flexibility of our approach have been confirmed with respect to the amount of data and the computation time.",
"",
"Frequent subgraph mining is an active research topic in the data mining community. A graph is a general model to represent data and has been used in many domains like cheminformatics and bioinformatics. Mining patterns from graph databases is challenging since graph related operations, such as subgraph testing, generally have higher time complexity than the corresponding operations on itemsets, sequences, and trees, which have been studied extensively. We propose a novel frequent subgraph mining algorithm: FFSM, which employs a vertical search scheme within an algebraic graph framework we have developed to reduce the number of redundant candidates proposed. Our empirical study on synthetic and real datasets demonstrates that FFSM achieves a substantial performance gain over the current start-of-the-art subgraph mining algorithm gSpan.",
"In recent years there has been an increased interest in algorithms that can perform frequent pattern discovery in large databases of graph structured objects. While the frequent connected subgraph mining problem for tree datasets can be solved in incremental polynomial time, it becomes intractable for arbitrary graph databases. Existing approaches have therefore resorted to various heuristic strategies and restrictions of the search space, but have not identified a practically relevant tractable graph class beyond trees. In this paper, we define the class of so called tenuous outerplanar graphs, a strict generalization of trees, develop a frequent subgraph mining algorithm for tenuous outerplanar graphs that works in incremental polynomial time, and evaluate the algorithm empirically on the NCI molecular graph dataset.",
"We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude."
]
}
|
1008.2626
|
1746953810
|
New applications of data mining, such as in biology, bioinformatics, or sociology, are faced with large datasets structured as graphs. We introduce a novel class of tree-shaped patterns called tree queries, and present algorithms for mining tree queries and tree-query associations in a large data graph. Novel about our class of patterns is that they can contain constants, and can contain existential nodes which are not counted when determining the number of occurrences of the pattern in the data graph. Our algorithms have a number of provable optimality properties, which are based on the theory of conjunctive database queries. We propose a practical, database-oriented implementation in SQL, and show that the approach works in practice through experiments on data about food webs, protein interactions, and citation analysis.
|
Cook and Holder @cite_6 apply the minimum description length (MDL) principle in their SUBDUE system to discover substructures in a labeled data graph. The MDL principle states that the best pattern is the one that minimizes the description length of the complete data graph; hence, in SUBDUE a pattern is evaluated by how well it compresses the entire dataset. The input to the SUBDUE system is a labeled data graph whose nodes and edges carry non-unique labels. This is in contrast with the unique labels ('constants') in our system. As we already noted, non-unique node and edge labels can easily be simulated by constants, but the converse is not obvious. The SUBDUE system mines only patterns, not association rules.
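Schematically, the MDL criterion used by SUBDUE can be stated as follows (a high-level form; SUBDUE's concrete bit-level encodings are more detailed):

```latex
\[
S^{\ast} \;=\; \arg\min_{S}\; \bigl[\, \mathrm{DL}(S) + \mathrm{DL}(G \mid S) \,\bigr],
\]
```

where DL(S) counts the bits needed to encode the pattern itself and DL(G|S) the bits needed to encode the data graph G after every occurrence of S has been collapsed to a placeholder node; the best substructure is the one giving the shortest total description.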
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2124996875"
],
"abstract": [
"The ability to identify interesting and repetitive substructures is an essential component to discovering knowledge in structural data. We describe a new version of our SUBDUE substructure discovery system based on the minimum description length principle. The SUBDUE system discovers substructures that compress the original data and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. SUBDUE uses a computationally-bounded inexact graph match that identifies similar, but not identical, instances of a substructure and finds an approximate measure of closeness of two substructures when under computational constraints. In addition to the minimumdescription length principle, other background knowledge can be used by SUBDUE to guide the search towards more appropriate substructures. Experiments in a variety of domains demonstrate SUBDUE's ability to find substructures capable of compressing the original data and to discover structural concepts important to the domain."
]
}
|
1008.2626
|
1746953810
|
New applications of data mining, such as in biology, bioinformatics, or sociology, are faced with large datasets structured as graphs. We introduce a novel class of tree-shaped patterns called tree queries, and present algorithms for mining tree queries and tree-query associations in a large data graph. Novel about our class of patterns is that they can contain constants, and can contain existential nodes which are not counted when determining the number of occurrences of the pattern in the data graph. Our algorithms have a number of provable optimality properties, which are based on the theory of conjunctive database queries. We propose a practical, database-oriented implementation in SQL, and show that the approach works in practice through experiments on data about food webs, protein interactions, and citation analysis.
|
The related work that was most influential for us is Warmr @cite_8 @cite_31 , although it belongs to the transactional category. Warmr is based on inductive logic programming, and its patterns also feature existential variables and parameters. While not restricted to tree shapes, the queries in Warmr are restricted in another sense, so that only transactional mining can be supported. Association rules in Warmr are defined in a naive manner through pattern extension, rather than being founded upon the theory of conjunctive-query containment. The Warmr system is also Prolog-oriented rather than database-oriented; we believe the latter is fundamental to the mining of single large data graphs, and it allows a more uniform and parallel treatment of parameter instantiations, as we will show in this paper. Finally, Warmr does not seriously attempt to avoid the generation of duplicates. Yet Warmr remains a pathbreaking work that did not receive sufficient follow-up in the data mining community at large; we hope our present work represents an improvement in this respect. Many of the improvements we make to Warmr were already envisaged (but without concrete algorithms) in 2002 by Goethals and the second author @cite_9 .
|
{
"cite_N": [
"@cite_9",
"@cite_31",
"@cite_8"
],
"mid": [
"2001199771",
"1570983415",
"2113243831"
],
"abstract": [
"In recent years, the problem of association rule mining in transactional data has been well studied. We propose to extend the discovery of classical association rules to the discovery of association rules of conjunctive queries in arbitrary relational data, inspired by the WARMR algorithm, developed by Dehaspe and Toivonen, that discovers association rules over a limited set of conjunctive queries. Conjunctive query evaluation in relational databases is well understood, but still poses some great challenges when approached from a discovery viewpoint in which patterns are generated and evaluated with respect to some well defined search space and pruning operators.",
"Within KDD, the discovery of frequent patterns has been studied in a variety of settings. In its simplest form, known from association rule mining, the task is to discover all frequent item sets, i.e., all combinations of items that are found in a sufficient number of examples. We present algorithms for relational association rule discovery that are well-suited for exploratory data mining. They offer the flexibility required to experiment with examples more complex than feature vectors and patterns more complex than item sets.",
"Discovery of frequent patterns has been studied in a variety of data mining settings. In its simplest form, known from association rule mining, the task is to discover all frequent itemsets, i.e., all combinations of items that are found in a sufficient number of examples. The fundamental task of association rule and frequent set discovery has been extended in various directions, allowing more useful patterns to be discovered with special purpose algorithms. We present WARMR, a general purpose inductive logic programming algorithm that addresses frequent query discovery: a very general DATALOG formulation of the frequent pattern discovery problem. The motivation for this novel approach is twofold. First, exploratory data mining is well supported: WARMR offers the flexibility required to experiment with standard and in particular novel settings not supported by special purpose algorithms. Also, application prototypes based on WARMR can be used as benchmarks in the comparison and evaluation of new special purpose algorithms. Second, the unified representation gives insight to the blurred picture of the frequent pattern discovery domain. Within the DATALOG formulation a number of dimensions appear that relink diverged settings. We demonstrate the frequent query approach and its use on two applications, one in alarm analysis, and one in a chemical toxicology domain."
]
}
|
1008.0045
|
2953365636
|
Random linear network codes can be designed and implemented in a distributed manner, with low computational complexity. However, these codes are classically implemented over finite fields whose size depends on some global network parameters (size of the network, the number of sinks) that may not be known prior to code design. Also, if new nodes join the entire network code may have to be redesigned. In this work, we present the first universal and robust distributed linear network coding schemes. Our schemes are universal since they are independent of all network parameters. They are robust since if nodes join or leave, the remaining nodes do not need to change their coding operations and the receivers can still decode. They are distributed since nodes need only have topological information about the part of the network upstream of them, which can be naturally streamed as part of the communication protocol. We present both probabilistic and deterministic schemes that are all asymptotically rate-optimal in the coding block-length, and have guarantees of correctness. Our probabilistic designs are computationally efficient, with order-optimal complexity. Our deterministic designs guarantee zero error decoding, albeit via codes with high computational complexity in general. Our coding schemes are based on network codes over "scalable fields". Instead of choosing coding coefficients from one field at every node, each node uses linear coding operations over an "effective field-size" that depends on the node's distance from the source node. The analysis of our schemes requires technical tools that may be of independent interest. In particular, we generalize the Schwartz-Zippel lemma by proving a non-uniform version, wherein variables are chosen from sets of possibly different sizes. We also provide a novel robust distributed algorithm to assign unique IDs to network nodes.
|
In the network coding setting, however, the literature is much sparser. The work of @cite_13 proposes "robust network codes" that are resilient to network failure patterns. However, the field size over which coding is performed depends on the number of failure patterns, and hence these codes are not truly universal. Further, the computational complexity of designing such codes is prohibitive. There is also significant work on network coding for packet erasure networks (for instance @cite_18 ). Our codes can tolerate all such errors.
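For context, here is a minimal sketch of plain random linear network coding, the baseline that these universal and robust constructions refine; it is not the scalable-field scheme of this paper. A prime field GF(257) is used purely for simplicity (practical systems typically work over GF(2^8)):

```python
import random

# Hedged sketch: each coded packet carries a random coefficient vector over a
# finite field plus the matching linear combination of the source packets; a
# sink decodes once it holds enough linearly independent combinations.

P = 257  # field size (prime, so Fermat inversion applies)

def encode(packets, rng):
    """One coded packet: random coefficients + the corresponding combination."""
    coeffs = [rng.randrange(P) for _ in packets]
    combo = [sum(c * pkt[j] for c, pkt in zip(coeffs, packets)) % P
             for j in range(len(packets[0]))]
    return coeffs, combo

def decode(coded, n):
    """Gaussian elimination over GF(P) on [coeffs | payload] rows."""
    rows = [list(c) + list(v) for c, v in coded]
    rank = 0
    for col in range(n):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            raise ValueError("not enough independent packets yet")
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)          # Fermat inverse
        rows[rank] = [x * inv % P for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return [row[n:] for row in rows[:n]]

rng = random.Random(1)
packets = [[10, 20, 30], [40, 50, 60]]                # n = 2 source packets
coded = [encode(packets, rng) for _ in range(3)]      # independent w.h.p.
print(decode(coded, n=2))                             # recovers the packets
```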
|
{
"cite_N": [
"@cite_18",
"@cite_13"
],
"mid": [
"2123350510",
"2160254188"
],
"abstract": [
"In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks where all information transmission is packetized and where some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. The capacity under the assumption that erasure locations on all the links of the network are provided to the destinations is obtained. It turns out that the capacity region has a nice max-flow min-cut interpretation. The definition of cut-capacity in these networks incorporates the broadcast property of the wireless medium. It is further shown that linear coding at nodes in the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed",
"We consider the issue of network capacity. Recent work by Li and Yeung examined the network capacity of multicast networks and related capacity to cutsets. Capacity is achieved by coding over a network. We present a new framework for studying networks and their capacity. Our framework, based on algebraic methods, is surprisingly simple and effective. For networks which are restricted to using linear codes (we make the meaning of linear codes precise, since the codes are not bit-wise linear), we find necessary and sufficient conditions for any given set of connections to be achievable over a given network. For multicast connections, linear codes are not a restrictive assumption, since all achievable connections can be achieved using linear codes. Moreover, coding can be used to maintain connections after permanent failures, such as the removal of an edge from the network. We show necessary and sufficient conditions for a set of connections to be robust to a set of permanent failures. For multicast connections, we show the rather surprising result that, if a multicast connection is achievable under different failure scenarios, a single static code can ensure robustness of the connection under all of those failure scenarios."
]
}
|
1008.0053
|
1561448485
|
The characteristics of wireless communication channels may vary with time due to fading, environmental changes and movement of mobile wireless devices. Tracking and estimating channel gains of wireless channels is therefore a fundamentally important element of many wireless communication systems. In particular, the receivers in many wireless networks need to estimate the channel gains by means of a training sequence. This paper studies the scaling law (in the network size) of the overhead for channel gain monitoring in wireless networks. We first investigate the scenario in which a receiver needs to track the channel gains with respect to multiple transmitters. To be concrete, suppose that there are n transmitters, and that in the current round of channel-gain estimation, no more than k channels suffer significant variations since the last round. We prove that Θ(k log((n+1)/k)) time slots is the minimum number of time slots needed to catch up with the k varied channels. At the same time, we propose a novel channel-gain monitoring scheme named ADMOT to achieve the overhead lower bound. ADMOT leverages recent advances in compressive sensing in signal processing and interference processing in wireless communication, to enable the receiver to estimate all n channels in a reliable and computationally efficient manner within O(k log((n+1)/k)) time slots. To the best of our knowledge, all previous channel-tracking schemes require Ω(n) time slots regardless of k. Note that based on the above results for the single-receiver scenario, the scaling law of the general setting is achieved, in which there are multiple transmitters, relay nodes and receivers.
|
(a) Channel monitoring in wireless networks. The works @cite_2 @cite_13 @cite_0 @cite_16 @cite_7 designed probing signals and estimation algorithms for estimating channel gains, and the works @cite_29 @cite_14 proposed schemes for estimating channel interference. In the first set of works (which are the ones related to ours), interference was not shown to be an advantage (compared with non-overlapping probing signals from different transmitters), and the overhead achieved is @math . Note that in the domain of wireless network-coding communication, the work @cite_30 was the first to show the advantage of interference, and later the work @cite_26 proposed an amplify-and-forward relaying strategy for easy implementation.
|
{
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_26",
"@cite_7",
"@cite_29",
"@cite_0",
"@cite_2",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2135861020",
"2152496949",
"2008270596",
"",
"2132233800",
"2108578477",
"2060994852",
"2137804761"
],
"abstract": [
"",
"The capacity problem in wireless mesh networks can be alleviated by equipping the mesh routers with multiple radios tuned to non-overlapping channels. However, channel assignment presents a challenge because co-located wireless networks are likely to be tuned to the same channels. The resulting increase in interference can adversely affect performance. This paper presents an interference-aware channel assignment algorithm and protocol for multi-radio wireless mesh networks that address this interference problem. The proposed solution intelligently assigns channels to radios to minimize interference within the mesh network and between the mesh network and co-located wireless networks. It utilizes a novel interference estimation technique implemented at each mesh router. An extension to the conflict graph model, the multi-radio conflict graph, is used to model the interference between the routers. We demonstrate our solution’s practicality through the evaluation of a prototype implementation in a IEEE 802.11 testbed. We also report on an extensive evaluation via simulations. In a sample multi-radio scenario, our solution yields performance gains in excess of 40 compared to a static assignment of channels.",
"Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.",
"We design pilot-symbol-assisted modulation for carrier frequency offset (CFO) and channel estimation in orthogonal frequency-division multiplexing transmissions over multi-input multi-output frequency-selective fading channels. The CFO and channel-estimation tasks rely on null-subcarrier and nonzero pilot symbols that we insert and hop from block to block. Because we separate CFO and channel estimation from symbol detection, the novel training patterns lead to further decoupled CFO and channel estimators. The performance of our algorithms is investigated analytically, and then compared with an existing approach by simulations.",
"",
"The problems of channel estimation and multiuser detection for direct sequence code division multiple access (DS CDMA) systems employing long spreading codes are considered. With regard to channel estimation, several procedures are proposed based on the least-squares approach, relying on the transmission of known training symbols but not requiring any timing synchronization. In particular, algorithms suited for the forward and reverse links of a single-rate DS CDMA cellular system are developed, and the case of a multirate multicode system, wherein high-rate users are split into multiple virtual low-rate users, is also considered. All of the proposed procedures are recursively implementable with a computational complexity that is quadratic in the processing gain, with regard to the issue of multiuser detection, an adaptive serial interference cancellation (SIC) receiver is considered, where the adaptivity stems from the fact that it is built upon the channel estimates provided by the estimation algorithm. Simulation results show that coupling the proposed estimation algorithms with a SIC receiver may yield, with a much lower computational complexity, performance levels close to those of the ideal linear minimum mean square error (MMSE) receiver, which assumes perfect knowledge of the channels for all of the users and which (in a long-code scenario) has a computational complexity per symbol interval proportional to the third power of the processing gain.",
"The problem of estimating the channel parameters of a new user in a multiuser code-division multiple-access (CDMA) communication system is addressed. It is assumed that the new user transmits training data over a slowly fading multipath channel. The proposed algorithm is based on maximum-likelihood estimation of the channel parameters. First, an asymptotic expression for the likelihood function of channel parameters is derived and a re-parametrization of this likelihood function is proposed. In this re-parametrization, the channel parameters are combined into a discrete time channel filter of symbol period length. Then, expectation-maximization algorithm and alternating projection algorithm-based techniques are considered to extract channel parameters from the estimated discrete channel filter, to maximize the derived asymptotic likelihood function. The performance of the proposed algorithms is evaluated through simulation studies. In addition, the proposed algorithms are compared to previously suggested subspace techniques for multipath channel estimation.",
"A critical issue in applications involving networks of wireless sensors is their ability to synchronize, and mitigate the fading propagation channel effects. Especially when distributed “slave” sensors (nodes) reach-back to communicate with the “master” sensor (gateway), low power cooperative schemes are well motivated. Viewing each node as an antenna element in a multi-input multi-output (MIMO) multi-antenna system, we design pilot patterns to estimate the multiple carrier frequency offsets (CFO), and the multiple channels corresponding to each node-gateway link. Our novel pilot scheme consists of non-zero pilot symbols along with zeros, which separate nodes in a time division multiple access (TDMA) fashion, and lead to low complexity schemes because CFO and channel estimators per node are decoupled. The resulting training algorithm is not only suitable for wireless sensor networks, but also for synchronization and channel estimation of single- and multi-carrier MIMO systems. We investigate the performance of our estimators analytically, and with simulations.",
"We consider correlated MIMO multiple access channels with block fading, where each block is divided into training and data transmission phases. We find the channel estimation and data transmission parameters that jointly optimize the achievable data rate of the system. Our results for the training phase are particularly interesting, where we show that the optimum training signals of the users should be non-overlapping in time. For the data transmission phase, we propose an iterative algorithm that updates the parameters of the users in a round-robin fashion. In particular, the algorithm updates the training and data transmission parameters of a user, when those of the rest of the users are fixed, in a way to maximize the achievable sum-rate in a multiple access channel; and iterates over users in a round-robin fashion."
]
}
|
1008.0053
|
1561448485
|
The characteristics of wireless communication channels may vary with time due to fading, environmental changes and movement of mobile wireless devices. Tracking and estimating channel gains of wireless channels is therefore a fundamentally important element of many wireless communication systems. In particular, the receivers in many wireless networks need to estimate the channel gains by means of a training sequence. This paper studies the scaling law (in the network size) of the overhead for channel gain monitoring in wireless networks. We first investigate the scenario in which a receiver needs to track the channel gains with respect to multiple transmitters. To be concrete, suppose that there are n transmitters, and that in the current round of channel-gain estimation, no more than k channels suffer significant variations since the last round. We prove that Θ(k log((n+1)/k)) time slots is the minimum number of time slots needed to catch up with the k varied channels. At the same time, we propose a novel channel-gain monitoring scheme named ADMOT to achieve the overhead lower bound. ADMOT leverages recent advances in compressive sensing in signal processing and interference processing in wireless communication, to enable the receiver to estimate all n channels in a reliable and computationally efficient manner within O(k log((n+1)/k)) time slots. To the best of our knowledge, all previous channel-tracking schemes require Ω(n) time slots regardless of k. Note that based on the above results for the single-receiver scenario, the scaling law of the general setting is achieved, in which there are multiple transmitters, relay nodes and receivers.
|
(b) Compressive sensing for channel estimation. ADMOT, proposed in this paper, uses recent advances in compressive sensing developed for sparse signal recovery @cite_3 @cite_31 . Compressive sensing has been used to recover sparse features of channels, e.g., a channel's delay-Doppler sparsity @cite_8 @cite_9 , a channel's sparse multipath structure @cite_28 @cite_12 , sparse-user detection @cite_18 @cite_1 @cite_10 , and a channel's sparse response @cite_4 . When the above schemes are applied to estimate all @math channels from the transmitters, the overhead is at least @math . In contrast, ADMOT uses compressive sensing to handle the differential information of all channels (embedded in the overlapped probing) simultaneously, and achieves the optimal overhead @math .
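A minimal sketch of the compressive-sensing primitive this builds on, using generic orthogonal matching pursuit rather than ADMOT itself: a k-sparse vector of channel-gain changes in R^n is recovered from m = O(k log(n/k)) random linear measurements, i.e., from far fewer probing slots than n.

```python
import numpy as np

# Hedged sketch: generic OMP recovery of a k-sparse change vector (not ADMOT).

def omp(A, y, k):
    """Recover a k-sparse x from y = A x via orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, k = 100, 3
m = 30                                  # roughly k log(n/k) measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 42, 77]] = [1.5, -2.0, 0.7]  # only k channels changed
y = A @ x_true
print(np.allclose(omp(A, y, k), x_true, atol=1e-8))  # True (w.h.p.)
```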
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"1643866620",
"2137012645",
"2133698785",
"2161741659",
"",
"2053586860",
"2145096794",
"",
"2109887708",
"2131665779"
],
"abstract": [
"This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability lambda, where typically lambda n < m << n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near-far resistance but--at high signal-to-noise ratios (SNRs)--may achieve capacities far below optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection on random-access signalling.",
"Channels with a sparse impulse response arise in a number of communication applications. Exploiting the sparsity of the channel, we show how an estimate of the channel may be obtained using a matching pursuit (MP) algorithm. This estimate is compared to thresholded variants of the least squares (LS) channel estimate. Among these sparse channel estimates, the MP estimate is computationally much simpler to implement and a shorter training sequence is required to form an accurate channel estimate leading to greater information throughput.",
"We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A new channel estimation technique using the recent methodology of compressed sensing (CS) is proposed. CS-based channel estimation exploits a channel's delay-Doppler sparsity to reduce the number of pilots and, hence, increase spectral efficiency. Simulation results demonstrate a significant reduction of the number of pilots relative to least-squares channel estimation.",
"Multipath signal propagation is the defining characteristic of terrestrial wireless channels. Virtually all existing statistical models for wireless channels are implicitly based on the assumption of rich multipath, which can be traced back to the seminal works of Bello and Kennedy on the wide-sense stationary uncorrelated scattering model, and more recently to the i.i.d. model tor multi-antenna channels proposed by Telatar, and Foschini Gans. However, physical arguments and growing experimental evidence suggest that physical channels encountered in practice exhibit a sparse multipath structure that gets more pronounced as the signal space dimelsion gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and discuss applications of the emerging theory of compressed sensing for efficient estimation of sparse multipath channels.",
"",
"We propose a generic feedback channel model, and compressive sensing based opportunistic feedback protocol for feedback resource (channels) reduction in MIMO Broadcast Channels under the assumption that both feedback and downlink channels are noisy and undergo block Rayleigh fading. The feedback resources are shared and are opportunistically accessed by users who are strong (users above a certain fixed threshold). Strong users send same feedback information on all shared channels. They are identified by the base station via compressive sensing. The proposed protocol is shown to achieve the same sum-rate throughput as that achieved by dedicated feedback schemes, but with feedback channels growing only logarithmically with number of users.",
"This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f spl isin C sup N and a randomly chosen set of frequencies spl Omega . Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set spl Omega ? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)= spl sigma sub spl tau spl isin T f( spl tau ) spl delta (t- spl tau ) obeying |T| spl les C sub M spl middot (log N) sup -1 spl middot | spl Omega | for some constant C sub M >0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N sup -M ), f can be reconstructed exactly as the solution to the spl lscr sub 1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C sub M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| spl middot logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N sup -M ) would in general require a number of frequency samples at least proportional to |T| spl middot logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.",
"",
"We propose a novel access technique for cellular downlink resource sharing. In particular, a distributed self-selection procedure is combined with the technique of compressed sensing to identify a set of users who are getting simultaneous access to the downlink broadcast channel. The performance of the proposed method is analyzed, and its suitability as an alternate access mechanism is argued.",
"In this paper, we investigate various channel estimators that exploit channel sparsity in the time and or Doppler domain for a multicarrier underwater acoustic system. We use a path-based channel model, where the channel is described by a limited number of paths, each characterized by a delay, Doppler scale, and attenuation factor, and derive the exact inter-carrier-interference (ICI) pattern. For channels that have limited Doppler spread we show that subspace algorithms from the array processing literature, namely Root-MUSIC and ESPRIT, can be applied for channel estimation. For channels with Doppler spread, we adopt a compressed sensing approach, in form of Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) algorithms, and utilize overcomplete dictionaries with an increased path delay resolution. Numerical simulation and experimental data of an OFDM block-by-block receiver are used to evaluate the proposed algorithms in comparison to the conventional least-squares (LS) channel estimator. We observe that subspace methods can tolerate small to moderate Doppler effects, and outperform the LS approach when the channel is indeed sparse. On the other hand, compressed sensing algorithms uniformly outperform the LS and subspace methods. Coupled with a channel equalizer mitigating ICI, the compressed sensing algorithms can effectively handle channels with significant Doppler spread."
]
}
|
1008.0053
|
1561448485
|
The characteristics of wireless communication channels may vary with time due to fading, environmental changes and movement of mobile wireless devices. Tracking and estimating the channel gains of wireless channels is therefore a fundamentally important element of many wireless communication systems. In particular, the receivers in many wireless networks need to estimate the channel gains by means of a training sequence. This paper studies the scaling law (in the network size) of the overhead for channel gain monitoring in wireless networks. We first investigate the scenario in which a receiver needs to track the channel gains with respect to multiple transmitters. To be concrete, suppose that there are n transmitters, and that in the current round of channel-gain estimation, no more than k channels suffer significant variations since the last round. We prove that "Θ(k log((n+1)/k)) time slots" is the minimum number of time slots needed to catch up with the k varied channels. At the same time, we propose a novel channel-gain monitoring scheme named ADMOT to achieve this overhead lower bound. ADMOT leverages recent advances in compressive sensing in signal processing and interference processing in wireless communication to enable the receiver to estimate all n channels in a reliable and computationally efficient manner within O(k log((n+1)/k)) time slots. To the best of our knowledge, all previous channel-tracking schemes require Ω(n) time slots regardless of k. Based on the above results for the single-receiver scenario, the scaling law is also obtained for the general setting in which there are multiple transmitters, relay nodes and receivers.
|
Note that some of the schemes mentioned above estimate the properties of a wideband channel, in which the channel gain varies with frequency within the channel bandwidth. In contrast, this paper investigates the scaling law (in the network size) of wireless network monitoring. For the sake of exposition, we focus on narrowband channels, in which the channel gain is flat across the bandwidth of the channel. We believe that, within the same scaling-law complexity, ADMOT can easily be generalized to OFDM systems @cite_22 , in which information is carried across multiple narrowband channels.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"1997834106"
],
"abstract": [
"1. Introduction 2. The wireless channel 3. Point-to-point communication: detection, diversity and channel uncertainty 4. Cellular systems: multiple access and interference management 5. Capacity of wireless channels 6. Multiuser capacity and opportunistic communication 7. MIMO I: spatial multiplexing and channel modeling 8. MIMO II: capacity and multiplexing architectures 9. MIMO III: diversity-multiplexing tradeoff and universal space-time codes 10. MIMO IV: multiuser communication A. Detection and estimation in additive Gaussian noise B. Information theory background."
]
}
|
1008.0064
|
2949441849
|
Erasure codes provide a storage-efficient alternative to replication-based redundancy in (networked) storage systems. They however entail high communication overhead for maintenance, when some of the encoded fragments are lost and need to be replenished. Such overheads arise from the fundamental need to first recreate (or keep separately) a copy of the whole object before any individual encoded fragment can be generated and replenished. There has recently been intense interest in exploring alternatives, the most prominent being regenerating codes (RGC) and hierarchical codes (HC). We propose as an alternative a new family of codes to improve the maintenance process, which we call self-repairing codes (SRC), with the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments without having to first reconstruct the original data, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing. These properties allow for not only low communication overhead to recreate a missing fragment, but also independent reconstruction of different missing fragments in parallel, possibly in different parts of the network. We analyze the static resilience of SRCs with respect to traditional erasure codes, and observe that SRCs incur marginally larger storage overhead in order to achieve the aforementioned properties. The salient SRC properties naturally translate to low communication overheads for reconstruction of lost fragments, and allow reconstruction with lower latency by facilitating repairs in parallel. These desirable properties make self-repairing codes a good and practical candidate for networked distributed storage systems.
|
In @cite_10 , the authors make the simple observation that encoding two bits into three by XORing the two information bits has the property that any two encoded bits can be used to recover the third. They then propose an iterative construction where, starting from small erasure codes, a larger code, called a hierarchical code (HC), is built by XORing subblocks made of erasure codes or combinations of them. Thus a subset of encoded blocks is typically enough to regenerate a missing one. However, the size of this subset can vary, from the minimal to the maximal number of encoded subblocks, determined not only by the number of lost blocks, but also by which specific blocks are lost. So, given some lost encoded blocks, this strategy may require an arbitrary number of other encoded blocks for repair.
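The base case of this construction is easy to state in code; the following minimal sketch shows the XOR observation (two information bits encoded into three, any two of which recover the third):

# Two information bits are encoded into three by adding an XOR parity bit;
# any two of (b1, b2, p) recover the third, since x ^ x = 0.
def encode(b1, b2):
    return b1, b2, b1 ^ b2

def repair(known_a, known_b):
    # The XOR of any two encoded bits yields the missing one.
    return known_a ^ known_b

b1, b2 = 1, 0
c1, c2, p = encode(b1, b2)
assert repair(c2, p) == b1   # rebuild the first information bit
assert repair(c1, p) == b2   # rebuild the second information bit
assert repair(c1, c2) == p   # rebuild the parity bit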
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2159028410"
],
"abstract": [
"Redundancy is the basic technique to provide reliability in storage systems consisting of multiple components. A redundancy scheme defines how the redundant data are produced and maintained. The simplest redundancy scheme is replication, which however suffers from storage inefficiency. Another approach is erasure coding, which provides the same level of reliability as replication using a significantly smaller amount of storage. When redundant data are lost, they need to be replaced. While replacing replicated data consists in a simple copy, it becomes a complex operation with erasure codes: new data are produced performing a coding over some other available data. The amount of data to be read and coded is d times larger than the amount of data produced. This implies that coding has a larger computational and I O cost, which, for distributed storage systems, translates into increased network traffic. Participants of peer-to-peer systems have ample storage and CPU power, but their network bandwidth may be limited. For these reasons existing coding techniques are not suitable for P2P storage. This work explores the design space between replication and the existing erasure codes. We propose and evaluate a new class of erasure codes, called hierarchical codes, which aims at finding a flexible trade-off that allows the reduction of the network traffic due to maintenance without losing the benefits given by traditional codes."
]
}
|
1008.0485
|
1972643083
|
We consider the one-sided exit problem for (fractionally) integrated random walks and Lévy processes. We prove that the rate of decrease of the non-exit probability -- the so-called survival exponent -- is universal in this class of processes. In particular, the survival exponent can be inferred from the (fractionally) integrated Brownian motion. This, in particular, extends Sinai's result on the survival exponent for the integrated simple random walk to general random walks with some finite exponential moment. Further, we prove existence and monotonicity of the survival exponent of fractionally integrated processes. We show that this exponent is related to a constant appearing in the study of random polynomials.
|
Li and Shao @cite_10 @cite_20 were the first to aim at building a theory for a whole class of processes. In the mentioned works, the lower tail probability problem is studied for Gaussian processes. It is shown that the decrease is indeed on the polynomial scale for many one-dimensional Gaussian processes. However, the technique does not yield values for the survival exponent. An important tool in the study of the above problems for Gaussian processes is the Slepian lemma @cite_2 and a comparable opposite inequality from @cite_17 .
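For concreteness, the lower tail (one-sided exit) problem studied in these works can be stated as follows; this is the standard formulation, with notation that is ours rather than that of the cited works:

\[
  \mathbb{P}\Big( \sup_{0 \le t \le T} X_t \le 1 \Big) = T^{-\theta + o(1)}, \qquad T \to \infty,
\]

where \(\theta \ge 0\) is the survival exponent of the process \(X\). For instance, for integrated Brownian motion \(X_t = \int_0^t B_s \, ds\), Sinai's classical result gives \(\theta = 1/4\).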
|
{
"cite_N": [
"@cite_10",
"@cite_17",
"@cite_20",
"@cite_2"
],
"mid": [
"2050098623",
"1981891714",
"2015860891",
"2069754508"
],
"abstract": [
"Let X=(Xt)t∈S be a real-valued Gaussian random process indexed by S with mean zero. General upper and lower estimates are given for the lower tail probability P(supt∈S(Xt−Xt0)≤x) as x→0, with t0∈S fixed. In particular, sharp rates are given for fractional Brownian sheet. Furthermore, connections between lower tail probabilities for Gaussian processes with stationary increments and level crossing probabilities for stationary Gaussian processes are studied. Our methods also provide useful information on a random pursuit problem for fractional Brownian particles.",
"Let @math and @math be standard normal random variables with covariance matrices @math and @math , respectively. Slepian's lemma says that if @math for @math , the lower bound @math is at least @math . In this paper an upper bound is given. The usefulness of the upper bound is justified with three concrete applications: (i) the new law of the iterated logarithm of Erdős and Revesz, (ii) the probability that a random polynomial does not have a real zero and (iii) the random pursuit problem for fractional Brownian particles. In particular, a conjecture of Kesten (1992) on the random pursuit problem for Brownian particles is confirmed, which leads to estimates of principal eigenvalues.",
"This paper surveys briefly some recent developments on lower tail probabilities for real valued Gaussian processes. Connections and applications to various problems are discussed. A new and simplified argument is given and it is of independent interest.",
"This paper is concerned with the probability, P[T, r(τ)], that a stationary Gaussian process with mean zero and covariance function r(τ) be nonnegative throughout a given interval of duration T. Several strict upper and lower bounds for P are given, along with some comparison theorems that relate P's for different covariance functions. Similar results are given for F[T, r(τ)], the probability distribution for the interval between two successive zeros of the process."
]
}
|
1008.0485
|
1972643083
|
We consider the one-sided exit problem for (fractionally) integrated random walks and Lévy processes. We prove that the rate of decrease of the non-exit probability -- the so-called survival exponent -- is universal in this class of processes. In particular, the survival exponent can be inferred from the (fractionally) integrated Brownian motion. This, in particular, extends Sinai's result on the survival exponent for the integrated simple random walk to general random walks with some finite exponential moment. Further, we prove existence and monotonicity of the survival exponent of fractionally integrated processes. We show that this exponent is related to a constant appearing in the study of random polynomials.
|
The survival exponent is unknown for integrated fractional Brownian motion; see @cite_34 . A related question for the Brownian sheet is solved in @cite_7 @cite_5 . Further references with partial results are @cite_29 @cite_1 @cite_19 .
|
{
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_1",
"@cite_19",
"@cite_5",
"@cite_34"
],
"mid": [
"",
"",
"2128129237",
"2090049826",
"2040360177",
"1993598763"
],
"abstract": [
"",
"",
"Let ( = _x ;x R ) be a mean—zero Gaussian process with covariance @math Here @math where ψ is a non—negative even function on R. Note that o is an even function and o(0) = 0. In this paper we assume that",
"Let X be a symmetric stable process of index α∈ (1,2] and let L x t denote the local time at time t and position x. Let V(t) be such that L t V(t) = sup x∈ ℝL t x . We call V(t) the most visited site of X up to time t. We prove the transience of V, that is, lim t →∞ |V(t)| = ∞ almost surely. An estimate is given concerning the rate of escape of V. The result extends a well-known theorem of Bass and Griffin for Brownian motion. Our approach is based upon an extension of the Ray–Knight theorem for symmetric Markov processes, and relates stable local times to fractional Brownian motion and further to the winding problem for planar Brownian motion.",
"Let @math denote the first passage time to 1 of a standard Brownian motion. It is well known that as @math goes to infinity, @math goes to zero at rate @math , where @math equals @math . The goal of this note is to establish a quantitative, infinite dimensional version of this result. Namely, we will prove the existence of positive and finite constants @math and @math , such that for all @math , @math where @math ' denotes the natural logarithm, and @math is the Fukushima-Malliavin capacity on the space of continuous functions.",
"We consider the integral of fractional Brownian motion (IFBM) and its functionals ξ T on the intervals (0,T) and (−T,T) of the following types: the maximum M T , the position of the maximum, the occupation time above zero etc. We show how the asymptotics of P(ξ T <1)=p T ,T→∞, is related to the Hausdorff dimension of Lagrangian regular points for the inviscid Burgers equation with FBM initial velocity. We produce computational evidence in favor of a power asymptotics for p T . The data do not reject the hypothesis that the exponent θ of the power law is related to the similarity parameter H of fractional Brownian motion as follows: θ=−(1−H) for the interval (−T,T) and θ=−H(1−H) for (0,T). The point 0 is special in that IFBM and its derivative both vanish there."
]
}
|
1008.0485
|
1972643083
|
We consider the one-sided exit problem for (fractionally) integrated random walks and Lévy processes. We prove that the rate of decrease of the non-exit probability -- the so-called survival exponent -- is universal in this class of processes. In particular, the survival exponent can be inferred from the (fractionally) integrated Brownian motion. This, in particular, extends Sinai's result on the survival exponent for the integrated simple random walk to general random walks with some finite exponential moment. Further, we prove existence and monotonicity of the survival exponent of fractionally integrated processes. We show that this exponent is related to a constant appearing in the study of random polynomials.
|
We further mention a recent work of Simon @cite_31 , where the problem is studied for certain integrated stable Lévy processes (in particular, with heavy tails). Even though we also study integrated Lévy processes in this paper, the results and techniques are completely disjoint.
|
{
"cite_N": [
"@cite_31"
],
"mid": [
"1835919119"
],
"abstract": [
"Let Z be a strictly a-stable real Levy process (a>1) and X be a fluctuating b-homogeneous additive functional of Z. We investigate the asymptotics of the first passage-time of X above 1, and give a general upper bound. When Z has no negative jumps, we prove that this bound is optimal and does not depend on the homogeneity parameter b. This extends a result of Y. Isozaki and solves partially a conjecture of Z. Shi."
]
}
|
1007.5110
|
2950848077
|
Top- @math queries allow end-users to focus on the most important (top- @math ) answers amongst those which satisfy the query. In traditional databases, a user-defined score function assigns a score value to each tuple and a top- @math query returns the @math tuples with the highest scores. In uncertain databases, the top- @math answer depends not only on the scores but also on the membership probabilities of tuples. Several top- @math definitions covering different aspects of the score-probability interplay have been proposed in the recent past R10,R4,R2,R8 . Most of the existing work in this research field is focused on developing efficient algorithms for answering top- @math queries on static uncertain data. Any change (insertion or deletion of a tuple, or change in the membership probability or score of a tuple) in the underlying data forces re-computation of query answers. Such re-computations are not practical considering the dynamic nature of data in many applications. In this paper, we propose a fully dynamic data structure that uses the ranking function @math proposed by R8 under the generally adopted model of @math -relations R11 . @math can effectively approximate various other top- @math definitions on uncertain data based on the value of the parameter @math . An @math -relation consists of a number of @math -tuples, where an @math -tuple is a set of mutually exclusive tuples (up to a constant number) called alternatives. Each @math -tuple in a relation randomly instantiates into one tuple from its alternatives. For an uncertain relation with @math tuples, our structure can answer top- @math queries in @math time, handles an update in @math time and takes @math space. Finally, we evaluate the practical efficiency of our structure on both synthetic and real data.
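As a minimal illustration of the x-relation model (the names and numbers are illustrative, not taken from the paper), the following sketch samples one possible world by letting each x-tuple instantiate into at most one of its mutually exclusive alternatives:

import random

# Each x-tuple is a list of mutually exclusive alternatives
# (name, score, probability); probabilities within an x-tuple sum to at most 1.
x_relation = [
    [("t1", 9.0, 0.6), ("t2", 7.0, 0.4)],  # always instantiates into t1 or t2
    [("t3", 8.0, 0.5)],                    # instantiates into t3 or stays absent
]

def sample_world(x_relation, rng):
    world = []
    for x_tuple in x_relation:
        u, acc = rng.random(), 0.0
        for name, score, prob in x_tuple:
            acc += prob
            if u < acc:              # pick this alternative
                world.append((name, score))
                break                # alternatives are mutually exclusive
    return world

print(sample_world(x_relation, random.Random(3)))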
|
Uncertain data management has attracted a lot of attention in recent years due to an increase in the number of application domains that naturally generate uncertain data. These include sensor networks @cite_19 , data cleaning @cite_4 and data integration @cite_2 @cite_14 . Several probabilistic data models have been proposed to capture data uncertainty (e.g., TRIO @cite_16 , MYSTIQ @cite_11 , MayBMS @cite_17 , ORION @cite_6 , PrDB @cite_8 ). Virtually all models have adopted possible worlds semantics. Each data model captures tuple uncertainty (existence probabilities are attached to the tuples of the database), attribute uncertainty (probability distributions are attached to the attributes), or both. A further distinction can be made among these models based on their support for correlations. Most of the work in probabilistic databases has either assumed independence or supported restricted correlations, mutual exclusion being the most common. Recently proposed approaches @cite_8 @cite_3 extend the support to arbitrary correlations.
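Under the simplest of these models, tuple-level uncertainty with independence, possible worlds semantics is easy to spell out; the following sketch (with illustrative membership probabilities) enumerates all worlds of a three-tuple relation together with their probabilities:

from itertools import product

tuples = {"t1": 0.9, "t2": 0.5, "t3": 0.2}  # tuple -> membership probability

worlds = []
for mask in product([True, False], repeat=len(tuples)):
    world = {t for (t, _), keep in zip(tuples.items(), mask) if keep}
    prob = 1.0
    for (_, p), keep in zip(tuples.items(), mask):
        prob *= p if keep else (1.0 - p)
    worlds.append((world, prob))

# The probabilities of all 2^3 possible worlds sum to 1.
assert abs(sum(p for _, p in worlds) - 1.0) < 1e-12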
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_17",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_11"
],
"mid": [
"",
"2142151675",
"1963853643",
"643516821",
"2171776999",
"2952367005",
"2097995023",
"1485156179",
"1486776102",
"2078686663"
],
"abstract": [
"",
"",
"Due to numerous applications producing noisy data, e.g., sensor data, experimental data, data from uncurated sources, information extraction, etc., there has been a surge of interest in the development of probabilistic databases. Most probabilistic database models proposed to date, however, fail to meet the challenges of real-world applications on two counts: (1) they often restrict the kinds of uncertainty that the user can represent; and (2) the query processing algorithms often cannot scale up to the needs of the application. In this work, we define a probabilistic database model, PrDB, that uses graphical models, a state-of-the-art probabilistic modeling technique developed within the statistics and machine learning community, to model uncertain data. We show how this results in a rich, complex yet compact probabilistic database model, which can capture the commonly occurring uncertainty models (tuple uncertainty, attribute uncertainty), more complex models (correlated tuples and attributes) and allows compact representation (shared and schema-level correlations). In addition, we show how query evaluation in PrDB translates into inference in an appropriately augmented graphical model. This allows us to easily use any of a myriad of exact and approximate inference algorithms developed within the graphical modeling community. While probabilistic inference provides a generic approach to solving queries, we show how the use of shared correlations, together with a novel inference algorithm that we developed based on bisimulation, can speed query processing significantly. We present a comprehensive experimental evaluation of the proposed techniques and show that even with a few shared correlations, significant speedups are possible.",
"Note: Chapter 6 Reference EPFL-CHAPTER-167070 Record created on 2011-06-22, modified on 2017-05-12",
"Many applications employ sensors for monitoring entities such as temperature and wind speed. A centralized database tracks these entities to enable query processing. Due to continuous changes in these values and limited resources (e.g., network bandwidth and battery power), it is often infeasible to store the exact values at all times. A similar situation exists for moving object environments that track the constantly changing locations of objects. In this environment, it is possible for database queries to produce incorrect or invalid results based upon old data. However, if the degree of error (or uncertainty) between the actual value and the database value is controlled, one can place more confidence in the answers to queries. More generally, query answers can be augmented with probabilistic estimates of the validity of the answers. In this paper we study probabilistic query evaluation based upon uncertain data. A classification of queries is made based upon the nature of the result set. For each class, we develop algorithms for computing probabilistic answers. We address the important issue of measuring the quality of the answers to these queries, and provide algorithms for efficiently pulling data from relevant sensors or moving objects in order to improve the quality of the executing queries. Extensive experiments are performed to examine the effectiveness of several data update policies.",
"Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techniques from constraint satisfaction, such as the Davis-Putnam algorithm. We complement this with a thorough experimental evaluation of the algorithms proposed. Our experiments show that our exact algorithms scale well to realistic database sizes and can in some scenarios compete with the most efficient previous approximation algorithms.",
"Declarative queries are proving to be an attractive paradigm for ineracting with networks of wireless sensors. The metaphor that \"the sensornet is a database\" is problematic, however, because sensors do not exhaustively represent the data in the real world. In order to map the raw sensor readings onto physical reality, a model of that reality is required to complement the readings. In this paper, we enrich interactive sensor querying with statistical modeling techniques. We demonstrate that such models can help provide answers that are both more meaningful, and, by introducing approximations with probabilistic confidences, significantly more efficient to compute in both time and energy. Utilizing the combination of a model and live data acquisition raises the challenging optimization problem of selecting the best sensor readings to acquire, balancing the increase in the confidence of our answer against the communication and data acquisition costs in the network. We describe an exponential time algorithm for finding the optimal solution to this optimization problem, and a polynomial-time heuristic for identifying solutions that perform well in practice. We evaluate our approach on several real-world sensor-network data sets, taking into account the real measured data and communication quality, demonstrating that our model-based approach provides a high-fidelity representation of the real phenomena and leads to significant performance gains versus traditional data acquisition techniques.",
"The problem of data cleaning, which consists of emoving inconsistencies and errors from original data sets, is well known in the area of decision support systems and data warehouses. However, for non-conventional applications, such as the migration of largely unstructured data into structured one, or the integration of heterogeneous scientific data sets in inter-discipl- inary fields (e.g., in environmental science), existing ETL (Extraction Transformation Loading) and data cleaning tools for writing data cleaning programs are insufficient. The main challenge with them is the design of a data flow graph that effectively generates clean data, and can perform efficiently on large sets of input data. The difficulty with them comes from (i) a lack of clear separation between the logical specification of data transformations and their physical implementation and (ii) the lack of explanation of cleaning results and user interaction facilities to tune a data cleaning program. This paper addresses these two problems and presents a language, an execution model and algorithms that enable users to express data cleaning specifications declaratively and perform the cleaning efficiently. We use as an example a set of bibliographic references used to construct the Citeseer Web site. The underlying data integration problem is to derive structured and clean textual records so that meaningful queries can be performed. Experimental results report on the assessement of the proposed framework for data cleaning.",
"Trio is a new database system that manages not only data, but also the accuracy and lineage of the data. Approximate (uncertain, probabilistic, incomplete, fuzzy, and imprecise!) databases have been proposed in the past, and the lineage problem also has been studied. The goals of the Trio project are to distill previous work into a simple and usable model, design a query language as an understandable extension to SQL, and most importantly build a working system---a system that augments conventional data management with both accuracy and lineage as an integral part of the data. This paper provides numerous motivating applications for Trio and lays out preliminary plans for the data model, query language, and prototype system.",
"We describe a system that supports arbitrarily complex SQL queries on probabilistic databases. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is efficient query evaluation, a problem that has not received attention in the past. We describe an optimization algorithm that can compute efficiently most queries. We show, however, that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods. For these queries we describe both an approximation algorithm and a Monte-Carlo simulation algorithm."
]
}
|
1007.5110
|
2950848077
|
Top- @math queries allow end-users to focus on the most important (top- @math ) answers amongst those which satisfy the query. In traditional databases, a user-defined score function assigns a score value to each tuple and a top- @math query returns the @math tuples with the highest scores. In uncertain databases, the top- @math answer depends not only on the scores but also on the membership probabilities of tuples. Several top- @math definitions covering different aspects of the score-probability interplay have been proposed in the recent past R10,R4,R2,R8 . Most of the existing work in this research field is focused on developing efficient algorithms for answering top- @math queries on static uncertain data. Any change (insertion or deletion of a tuple, or change in the membership probability or score of a tuple) in the underlying data forces re-computation of query answers. Such re-computations are not practical considering the dynamic nature of data in many applications. In this paper, we propose a fully dynamic data structure that uses the ranking function @math proposed by R8 under the generally adopted model of @math -relations R11 . @math can effectively approximate various other top- @math definitions on uncertain data based on the value of the parameter @math . An @math -relation consists of a number of @math -tuples, where an @math -tuple is a set of mutually exclusive tuples (up to a constant number) called alternatives. Each @math -tuple in a relation randomly instantiates into one tuple from its alternatives. For an uncertain relation with @math tuples, our structure can answer top- @math queries in @math time, handles an update in @math time and takes @math space. Finally, we evaluate the practical efficiency of our structure on both synthetic and real data.
|
Efforts have been made in recent times to extend the semantics of top- @math to uncertain databases. @cite_5 defined the problem of ranking over uncertain databases; they proposed two ranking functions, namely U-Top @math and U- @math Ranks , and gave algorithms for each of them. Improved algorithms for the same ranking functions were later presented by @cite_15 . @cite_1 proposed another top- @math definition, PT- @math , and gave efficient solutions for it. @cite_0 defined a number of key properties satisfied by top- @math over deterministic data, including exact- @math , containment, unique-rank, value-invariance, and stability . With each of the existing top- @math definitions lacking one or more of these properties, Cormode et al. @cite_0 proposed yet another ranking function, expected-rank . As the list of top- @math definitions continued to grow, @cite_9 argued that a single specific ranking function may not be appropriate to rank different uncertain databases, empirically illustrated the diverse, conflicting nature of existing definitions, and proposed parameterized ranking functions that generalize or can approximate many known ranking functions.
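As a rough illustration of one of these semantics (the data is illustrative and not from the cited papers), the following sketch estimates expected ranks by Monte Carlo over possible worlds, charging an absent tuple the size of the world, which is one common convention:

import random

# Independent tuples: (name, score, membership probability).
tuples = [("a", 9.0, 0.4), ("b", 8.0, 0.9), ("c", 5.0, 0.7)]

def expected_ranks(tuples, trials=100_000, seed=1):
    sums = {name: 0.0 for name, _, _ in tuples}
    rng = random.Random(seed)
    for _ in range(trials):
        world = [(n, s) for n, s, p in tuples if rng.random() < p]
        world.sort(key=lambda t: -t[1])                  # rank by score within the world
        rank_of = {n: r for r, (n, _) in enumerate(world)}
        for name, _, _ in tuples:
            sums[name] += rank_of.get(name, len(world))  # absent tuples are ranked last
    return {n: s / trials for n, s in sums.items()}

print(expected_ranks(tuples))  # tuples would be reported by ascending expected rank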
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_15"
],
"mid": [
"",
"2041763948",
"2140237757",
"2138271690",
"2125402103"
],
"abstract": [
"",
"Uncertain data is inherent in a few important applications such as environmental surveillance and mobile object tracking. Top-k queries (also known as ranking queries) are often natural and useful in analyzing uncertain data in those applications. In this paper, we study the problem of answering probabilistic threshold top-k queries on uncertain data, which computes uncertain records taking a probability of at least p to be in the top-k list where p is a user specified probability threshold. We present an efficient exact algorithm, a fast sampling algorithm, and a Poisson approximation based algorithm. An empirical study using real and synthetic data sets verifies the effectiveness of probabilistic threshold top-k queries and the efficiency of our methods.",
"When dealing with massive quantities of data, top-k queries are a powerful technique for returning only the k most relevant tuples for inspection, based on a scoring function. The problem of efficiently answering such ranking queries has been studied and analyzed extensively within traditional database settings. The importance of the top-k is perhaps even greater in probabilistic databases, where a relation can encode exponentially many possible worlds. There have been several recent attempts to propose definitions and algorithms for ranking queries over probabilistic data. However, these all lack many of the intuitive properties of a top-k over deterministic data. Specifically, we define a number of fundamental properties, including exact-k, containment, unique-rank, value-invariance, and stability, which are all satisfied by ranking queries on certain data. We argue that all these conditions should also be fulfilled by any reasonable definition for ranking uncertain data. Unfortunately, none of the existing definitions is able to achieve this. To remedy this shortcoming, this work proposes an intuitive new approach of expected rank. This uses the well-founded notion of the expected rank of each tuple across all possible worlds as the basis of the ranking. We are able to prove that, in contrast to all existing approaches, the expected rank satisfies all the required properties for a ranking query. We provide efficient solutions to compute this ranking across the major models of uncertain data, such as attribute-level and tuple-level uncertainty. For an uncertain relation of N tuples, the processing cost is O(N logN)—no worse than simply sorting the relation. In settings where there is a high cost for generating each tuple in turn, we provide pruning techniques based on probabilistic tail bounds that can terminate the search early and guarantee that the top-k has been found. Finally, a comprehensive experimental study confirms the effectiveness of our approach.",
"Top-k processing in uncertain databases is semantically and computationally different from traditional top-k processing. The interplay between score and uncertainty makes traditional techniques inapplicable. We introduce new probabilistic formulations for top-k queries. Our formulations are based on \"marriage\" of traditional top-k semantics and possible worlds semantics. In the light of these formulations, we construct a framework that encapsulates a state space model and efficient query processing techniques to tackle the challenges of uncertain data settings. We prove that our techniques are optimal in terms of the number of accessed tuples and materialized search states. Our experiments show the efficiency of our techniques under different data distributions with orders of magnitude improvement over naive materialization of possible worlds.",
"This work introduces novel polynomial-time algorithms for processing top-k queries in uncertain databases, under the generally adopted model of x-relations. An x-relation consists of a number of x-tuples, and each x-tuple randomly instantiates into one tuple from one or more alternatives. Our results significantly improve the best known algorithms for top-k query processing in uncertain databases, in terms of both running time and memory usage. Focusing on the single-alternative case, the new algorithms are orders of magnitude faster."
]
}
|
1007.5240
|
2951994895
|
Several social-aware forwarding strategies have been recently introduced in opportunistic networks, and proved effective in considerably increasing routing performance through extensive simulation studies based on real-world data. However, this performance improvement comes at the expense of storing a considerable amount of state information (e.g., history of past encounters) at the nodes. Hence, whether the benefit to routing performance comes directly from the social-aware forwarding mechanism, or indirectly from the fact that state information is exploited, is not clear. Thus, the question of whether social-aware forwarding by itself is effective in improving opportunistic network routing performance has remained unaddressed so far. In this paper, we give a first, positive answer to the above question, by investigating the expected message delivery time as the size of the network grows larger.
|
Performance analysis of opportunistic networks has been the subject of intensive research in recent years. In particular, the analysis of routing performance -- expressed in terms of the expected message delivery time, as done in this paper -- has been considered in @cite_9 @cite_17 @cite_20 @cite_7 @cite_5 @cite_10 . More recently, the distribution of the message delivery time has also been studied @cite_11 . These studies assume a mobility model equivalent to one of the two mobility models considered in this paper, namely the social-oblivious mobility model. Furthermore, they all consider social-oblivious routing protocols such as epidemic @cite_16 , two-hop @cite_3 , and BinarySW routing @cite_5 .
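As a small illustration of this line of analysis, the following sketch estimates the expected delivery time of two-hop relaying by Monte Carlo, assuming i.i.d. exponential inter-contact times (a common social-oblivious assumption); the parameters are illustrative:

import random

def two_hop_delay(n_relays, rate, trials=20_000, seed=2):
    # The message is delivered either at the first direct source-destination
    # contact, or via whichever relay first completes source -> relay -> destination.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        direct = rng.expovariate(rate)
        via_relay = min(rng.expovariate(rate) + rng.expovariate(rate)
                        for _ in range(n_relays))
        total += min(direct, via_relay)
    return total / trials

for n in (1, 5, 20):
    print(n, round(two_hop_delay(n, rate=1.0), 3))  # delay shrinks as relays are added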
|
{
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2101946268",
"",
"2145594425",
"2149959815",
"2129849999",
"1572481965",
"2109528718",
"",
"2096509679"
],
"abstract": [
"In this paper, we present a framework for analyzing routing performance in delay tolerant networks (DTNs). Differently from previous work, our framework is aimed at characterizing the exact distribution of relevant performance metrics, which is a substantial improvement over existing studies characterizing either the expected value of the metric, or an asymptotic approximation of the actual distribution. In particular, the considered performance metrics are packet delivery delay, and communication cost, expressed as number of copies of a packet circulating in the network at the time of delivery. Our proposed framework is based on a characterization of the routing process as a stochastic coloring process and can be applied to model performance of most stateless delay tolerant routing protocols, such as epidemic, two-hops, and spray and wait. After introducing the framework, we present examples of its application to derive the packet delivery delay and communication cost distribution of two such protocols, namely epidemic and two-hops routing. Characterizing packet delivery delay and communication cost distribution is important to investigate fundamental properties of delay tolerant networks. As an example, we show how packet delivery delay distribution can be used to estimate how epidemic routing performance changes in presence of different degrees of node cooperation within the network. More specifically, we consider fully cooperative, noncooperative, and probabilistic cooperative scenarios, and derive nearly exact expressions of the packet delivery rate (PDR) under these scenarios based on our proposed framework. The comparison of the obtained packet delivery rate estimation in the various cooperation scenarios suggests that even a modest level of node cooperation (probabilistic cooperation with a low probability of cooperation) is sufficient to achieve 2-fold performance improvement with respect to the most pessimistic scenario in which all potential forwarders drop packets.",
"",
"Considered is a mobile ad hoc network consisting of three types of nodes (source, destination and relay nodes) and using the two-hop relay routing protocol. Packets at relay nodes are assumed to have a limited lifetime in the network. All nodes are moving inside a bounded region according to some random mobility model. Both closed-form expressions, and asymptotic results when the number of nodes is large, are provided for the packet delivery delay and the energy needed to transmit a packet from the source to its destination. We also introduce and evaluate a variant of the two-hop relay protocol that limits the number of generated copies in the network. Our model is validated through simulations for two mobility models (random waypoint and random direction mobility models), numerical results for the two-hop relay protocols are reported, and the performance of the two-hop routing and of the epidemic routing protocols are compared.",
"The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying.",
"Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family routing schemes that \"spray\" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays.",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100 of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"In this paper, we develop a rigorous, unified framework based on ordinary differential equations (ODEs) to study epidemic routing and its variations. These ODEs can be derived as limits of Markovian models under a natural scaling as the number of nodes increases. While an analytical study of Markovian models is quite complex and numerical solution impractical for large networks, the corresponding ODE models yield closed-form expressions for several performance metrics of interest, and a numerical solution complexity that does not increase with the number of nodes. Using this ODE approach, we investigate how resources such as buffer space and the number of copies made for a packet can be traded for faster delivery, illustrating the differences among various forwarding and recovery schemes considered. We perform model validations through simulation studies. Finally we consider the effect of buffer management by complementing the forwarding models with Markovian and fluid buffer models.",
"",
"We study data transfer opportunities between wireless devices carried by humans. We observe that the distribution of the intercontact time (the time gap separating two contacts between the same pair of devices) may be well approximated by a power law over the range [10 minutes; 1 day]. This observation is confirmed using eight distinct experimental data sets. It is at odds with the exponential decay implied by the most commonly used mobility models. In this paper, we study how this newly uncovered characteristic of human mobility impacts one class of forwarding algorithms previously proposed. We use a simplified model based on the renewal theory to study how the parameters of the distribution impact the performance in terms of the delivery delay of these algorithms. We make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices"
]
}
|