abstract | authors | title | __index_level_0__ |
---|---|---|---|
The emergence of new wireless technologies has paved the way for the introduction of a new class of symmetric services with joint uplink/downlink quality of service requirements. In this paper, we formulate joint uplink/downlink resource allocation in OFDMA networks as an optimization problem with total sum-rate maximization as the objective function. A regularization term is added to the problem formulation to account for the coupling between the uplink and the downlink directions. This additional term controls the resource allocation via the minimization of the per-user difference between the uplink and downlink rates. We solve the formulated problem using dual optimization techniques. Performance results demonstrate the effectiveness of the proposed joint uplink/downlink allocation scheme compared to traditional single-link schemes. | ['Ahmad M. El-Hajj', 'Zaher Dawy'] | On optimized joint uplink/downlink resource allocation in OFDMA networks | 448,357 |
This paper describes a swarm control algorithm based on distributed pheromone maps to effectively track a moving fire front. The algorithm uses a single heuristic to maintain persistent surveillance for detecting new fires, while concurrently allowing for tracking of known fires. Control of the swarm is fully distributed and self-contained, requiring no external communication infrastructure and remaining resilient to individual hardware failures. | ['David J. Howden'] | Fire tracking with collective intelligence using dynamic priority maps | 35,625 |
Machine Learning Made Easy with Sherlock. | ['Thomas Ginter', 'Olga V. Patterson', 'Ryan Cornia', 'Scott L. DuVall'] | Machine Learning Made Easy with Sherlock. | 983,286 |
We describe automated methods for constructing nonisomorphism proofs for pairs of graphs. The proofs can be human-readable or machine-readable. We have developed an experimental implementation of an interactive webpage producing a proof of (non)isomorphism when given two graphs. | ['Am Arjeh Cohen', 'Jw Jan Willem Knopper', 'Scott H. Murray'] | Automatic Proof of Graph Nonisomorphism | 426,932 |
The art and science of building ontologies have been developed to the point where it is no longer sufficient to design and implement a new ontology. Rather, one needs to follow the process of building an ontology by evaluating its quality in absolute numeric terms. If another ontology in the same domain exists, then the two ontologies should be compared in a quantitative manner to determine which one of them is better. Furthermore, the quality scoring mechanism should provide clues concerning the sections of the ontology (one or both) that need improvement. Ontologies are complex structures which exist in many different variations. Even after imposing a basic structural framework and choosing a domain, two ontologies may be evaluated with respect to a number of different features. In this paper we will concentrate on a single ontology feature and assume that all other features are fixed. We have developed a mechanism to measure the quality of this ontology feature, preferred term(s), based on the concept of naturalness, and show that it agrees very well with human judgments. Thus we provide an approach towards the principled selection of the preferred terms in an ontology. | ['Soon Ae Chun', 'James Geller'] | Evaluating Ontologies Based on the Naturalness of Their Preferred Terms | 434,023 |
This paper describes a method of establishing dense matching of two views with large displacements. The problem addressed is formulated as the minimization of an energy functional that combines a similarity term and a smoothness term. The minimization of the energy functional reduces to solving a large system of nonlinear equations which is a discretized version of the Euler-Lagrange equation for the energy minimization problem at each image point. A dense displacement map is computed by solving the system of equations using a coarse-to-fine approach. The method has been successfully tested on two applications: (1) computing binocular disparities from stereo pair images and (2) computing a dense displacement vector field (optical flow) from two views in a time-varying image sequence. | ['Naokazu Yokoya'] | Dense matching of two views with large displacement | 260,555 |
We consider the problem of dynamically adjusting the formation and size of robot teams performing distributed area coverage, when they encounter obstacles or occlusions along their path. Based on our earlier formulation of the robotic team formation problem as a coalitional game called a weighted voting game (WVG), we show that the robot team size can be dynamically adapted by adjusting the WVG's quota parameter. We use a Q-learning algorithm to learn the value of the quota parameter and a policy reuse mechanism to adapt the learning process to changes in the underlying environment. Experimental results using simulated e-puck robots within the Webots simulator show that our Q-learning algorithm converges within a finite number of steps in different types of environments. Using the learning algorithm also improves, by 5−10%, the performance of an area coverage application in which multiple robot teams move in formation to explore an initially unknown environment. | ['Prithviraj Dasgupta', 'Ke Cheng', 'Bikramjit Banerjee'] | Adaptive multi-robot team reconfiguration using a policy-reuse reinforcement learning approach | 101,980 |
Traditional distributed source coding rarely considers the possible link between separate encoders. However, the broadcast nature of wireless communication in sensor networks provides a free gossip mechanism which can be used to simplify encoding/decoding and reduce transmission power. Using this broadcast advantage, we present a new two-encoder scheme which imitates the ping-pong game and has a successive approximation structure. For the quadratic Gaussian case, we prove that this scheme is successively refinable on the {sum-rate, distortion pair} surface, which is characterized by the rate-distortion region of the distributed two-encoder source coding. A potential energy saving over conventional distributed coding is also illustrated. This ping-pong distributed coding idea can be extended to the multiple encoder case and provides the theoretical foundation for a new class of distributed image coding method in wireless scenarios. | ['Zichong Chen', 'Guillermo Barrenetxea', 'Martin Vetterli'] | Distributed successive approximation coding using broadcast advantage: The two-encoder case | 155,490 |
A classic result in the study of spanners is the existence of light low-stretch spanners for Euclidean spaces. These spanners have arbitrarily low stretch, and weight only a constant factor greater than that of the minimum spanning tree of the points (with dependence on the stretch and Euclidean dimension). A central open problem in this field asks whether other spaces admit low weight spanners as well -- for example, metric spaces with low intrinsic dimension -- yet only a handful of results of this type are known. In this paper, we consider snowflake metric spaces of low intrinsic dimension. The α-snowflake of a metric (X, Δ) is the metric (X, Δ^α), for 0 < α < 1. | ['Lee-Ad Gottlieb', 'Shay Solomon'] | Light spanners for Snowflake Metrics | 4,259 |
Towards an Accessible Personal Health Record | ['Ioannis Basdekis', 'Vangelis Sakkalis', 'Constantine Stephanidis'] | Towards an Accessible Personal Health Record | 790,179 |
The use of geographic data has become a widespread concern, mainly within applications related to spatial planning and spatial decision-making. Therefore, changing environments require databases adaptable to changes that occur over time. Thus, supporting geographic information evolution is essential within changing environments. The evolution is expressed in the geographic database by a series of update operations that should maintain its consistency. This paper proposes an approach for updating geographic databases, based on update operators and integrity constraint checking algorithms. Temporal versioning is used to keep track of changes. Every version presents the state of the geographic database at a given time. Integrity constraint checking algorithms allow the database consistency to be maintained upon update. To implement our approach and assist users in the evolution process, the GeoVersioning tool is developed and tested on a sample geographic database. | ['Wassim Jaziri', 'Najla Sassi', 'Dhouha Damak'] | Using Temporal Versioning and Integrity Constraints for Updating Geographic Databases and Maintaining Their Consistency | 568,702 |
This paper proposes two novel approaches for the identification of Takagi-Sugeno fuzzy models with time variant and invariant features. The Mixed Fuzzy Clustering algorithm is used to determine the parameters of Takagi-Sugeno fuzzy models in two different ways: (1) the antecedent fuzzy sets are determined based on the partition matrix generated by the Mixed Fuzzy Clustering algorithm; (2) the input features are transformed using the same algorithm and the antecedent fuzzy sets are derived using Fuzzy C-Means clustering. The proposed approaches are tested on four different health care applications: readmissions in intensive care units, administration of vasopressors and mortality. The results show that the proposed clustering algorithm resulted in an increase in the performance of the fuzzy models in three out of four applications in comparison to the use of Fuzzy C-Means. | ['Marta C. Ferreira', 'Cátia M. Salgado', 'Joaquim L. Viegas', 'Hanna Schafer', 'Carlos S. Azevedo', 'Susana M. Vieira', 'João Miguel da Costa Sousa'] | Fuzzy modeling based on Mixed Fuzzy Clustering for health care applications | 546,723 |
Thermally conductive adhesives are one of the major concerns of contemporary micro-electronics. They are especially important in applications where effective heat dissipation is the key factor for reliability. Currently there is a lot of ongoing research aimed at improving the basic thermal property of adhesives, namely heat conductance. According to literature data, the heat conductance can vary from 0.1 up to 60 W/m·K. It depends on the filler material and its content and configuration, but also on the thermo-mechanical properties of the matrix. Numerical simulation has nowadays become an indispensable tool for rapid, non-destructive and low-cost experiments. The basic problem of numerical experiments is accuracy. Nevertheless, the error can be minimized by combining numerical and traditional experiments. This can be achieved by means of partial validation of numerical results by traditional experiments or by precise and appropriate measurement of material properties. In fact, the above approach was applied in the current work in order to simulate the influence of curing temperature and time on the thermal conductance of polymers. Thermally conductive adhesives belong to polymer materials. In order to apply numerical simulation it is required to have an appropriate description of the thermal and mechanical behavior of polymers. Most often polymers are described by a cure-dependent or cure-independent linear viscoelastic model [3, 5]. Having this model, which in fact can be measured experimentally, it is possible to simulate the stress and strain fields caused by polymer curing and shrinkage phenomena and finally assess the thermal conductance accordingly. | ['Tomasz Falat', 'Artur Wymyslowski', 'Jana Kolbe'] | Numerical Approach to Characterization of Thermally Conductive Adhesives | 427,098 |
A new Gradient Projection Method is constructed. The optimization problem with an inequality constraint on the joint velocity limit, defined by the infinity norm, is solved, and its analytical form is derived to achieve the best optimization ability within a given joint velocity limit. Based on its optimization ability and the joint velocity limit constraint, a comparison with the fixed scalar algorithm is made. A vision-based system is constructed and a real-time obstacle avoidance experiment is implemented with this new method. The results show that this new method is superior to other methods. It can automatically change the value of the scalar k so that the robot obtains the best optimization ability, exhibits no numerical oscillations, and meets any specified joint velocity limit in terms of the infinity norm. | ['Yu Liu', 'Jing Zhao', 'Biyun Xie'] | Obstacle avoidance for redundant manipulators based on a Novel Gradient Projection Method with a functional scalar | 33,186 |
Massive multiple input, multiple output (M-MIMO) technologies have been proposed to scale up data rates to gigabits per second in the forthcoming 5G mobile communications systems. However, one of the crucial constraints is the spatial dimension available to implement M-MIMO. To cope with the space constraint and to gain more flexibility in 3D beamforming (3D-BF), we propose antenna polarization in M-MIMO systems. In this paper, we design a polarized M-MIMO (PM-MIMO) system associated with 3D-BF applications, where the system architectures for diversity and multiplexing technologies achieved by polarized 3D beams are provided. Different from the conventional 3D-BF achieved by planar M-MIMO technology to control the downtilted beam in the vertical domain, the proposed PM-MIMO realizes 3D-BF via the linear combination of polarized beams. In addition, an effective array selection scheme is proposed to optimize the beam-width and to enhance system performance by exploiting diversity and multiplexing gains; and a blind channel estimation (BCE) approach is also proposed to avoid pilot contamination in PM-MIMO. Based on the Long Term Evolution-Advanced (LTE-A) specification, the simulation results finally confirm the validity of our proposals. | ['Xin Su', 'KyungHi Chang'] | Diversity and Multiplexing Technologies by 3D Beams in Polarized Massive MIMO Systems | 594,933 |
The problem of parameter estimation of superimposed signals in white Gaussian noise is considered. Closed-form expressions of the Cramer-Rao bound for real or complex signals with vector parameters are derived, extending recent results by P. Stoica and A. Nehorai (1989). | ['Sze Fong Yau', 'Yoram Bresler'] | A compact Cramer-Rao bound expression for parametric estimation of superimposed signals | 80,923 |
Trust, as one of the important social relations, has attracted much attention from researchers in the field of social network-based recommender systems. In trust network-based recommender systems, there are normally two roles for users: truster and trustee. Most trust-based methods generally utilize explicit links between trusters and trustees to find similar neighbors for recommendation. However, implicit correlations may also exist between users, especially between users with the same role (truster or trustee). In this paper, we propose a novel Collaborative Filtering method called CF-TC, which exploits Trust Context to discover implicit correlations between users with the same role for recommendation. In this method, each user is first represented by the same-role users who co-occur with that user. Then, similarities between users with the same role are measured based on the obtained user representation. Finally, two variants of our method are proposed to fuse these computed similarities into traditional collaborative filtering for rating prediction. Using two publicly available real-world Epinions and Ciao datasets, we conduct comprehensive experiments to compare the performance of our proposed method with some existing benchmark methods. The results show that CF-TC outperforms other baseline methods in terms of RMSE, MAE, and recall. | ['Haifeng Liu', 'Zhuo Yang', 'Jun Zhang', 'Xiaomei Bai', 'Wei Wang', 'Feng Xia'] | Mining Implicit Correlations between Users with the Same Role for Trust-Aware Recommendation | 696,793 |
In this paper we study the t-branch split cuts introduced by Li and Richard (Discret Optim 5:724–734, 2008). They presented a family of mixed-integer programs with n integer variables and a single continuous variable and conjectured that the convex hull of integer solutions for any n has unbounded rank with respect to (n−1)-branch split cuts. It was shown earlier by Cook et al. (Math Program 47:155–174, 1990) that this conjecture is true when n = 2, and Li and Richard proved the conjecture when n = 3. In this paper we show that this conjecture is also true for all n > 3. | ['Sanjeeb Dash', 'Oktay Günlük'] | On t-branch split cuts for mixed-integer programs | 51,616 |
Parametric modeling and estimation of non-Gaussian multidimensional probability density functions is a difficult problem whose solution is required by many applications in signal and image processing. Considerable effort has been devoted to escaping the usual Gaussian assumption by developing perturbed Gaussian models such as spherically invariant random vectors (SIRVs). In this work, we introduce an alternative solution based on copulas, which theoretically enables the representation of any multivariate distribution. Estimation procedures are proposed for some mixtures of copula-based densities and are compared in the hidden Markov chain setting, in order to perform statistical unsupervised classification of signals or images. Copulas and SIRVs that are useful for multivariate signal classification are studied in particular through experiments. | ['Nicolas Brunel', 'Jérôme Lapuyade-Lahorgue', 'Wojciech Pieczynski'] | Modeling and Unsupervised Classification of Multivariate Hidden Markov Chains With Copulas | 333,036 |
Summary: A web server has been established for the statistical evaluation of introns in various taxonomic groups and the comparison of taxonomic groups in terms of intron type, length, base composition, etc. The options include the graphic analysis of splice sites and a probability test for exon-shuffling within the selected group. Availability: introns.abc.hu, http://www.icgeb.trieste.it/introns | ['Endre Barta', 'László Kaján', 'Sándor Pongor'] | IS: a web-site for intron statistics | 328,584 |
We present a new approach to the stability analysis of finite receding horizon control applied to constrained linear systems. By relating the final predicted state to the current state through a bound on the terminal cost, it is shown that knowledge of upper and lower bounds for the finite horizon costs is sufficient to determine the stability of a receding horizon controller. This analysis is valid for receding horizon schemes with arbitrary positive-definite terminal weights and does not rely on the use of stabilizing constraints. The result is a computable test for stability, and two simple examples are used to illustrate its application. | ['James A. Primbs', 'Vesna Nevistić'] | A new approach to stability analysis for constrained finite receding horizon control without end constraints | 532,494 |
The emergence of distributed speech recognition has generated the need to mitigate the degradations that the transmission channel introduces in the speech features used for recognition. This work proposes a hidden Markov model (HMM) framework from which different mitigation techniques oriented to wireless channels can be derived. First, we study the performance of two techniques based on the use of a minimum mean square error (MMSE) estimation, a raw MMSE and a forward MMSE estimation, over additive white Gaussian noise (AWGN) channels. These techniques are also adapted to bursty channels. Then, we propose two new mitigation methods especially suitable for bursty channels. The first one is based on a forward-backward MMSE estimation and the second one on the well-known Viterbi algorithm. Different experiments are carried out, dealing with several issues such as the application of hard decisions on the received bits or the influence of the estimated channel SNR. The experimental results show that the HMM-based techniques can effectively mitigate channel errors, even in very poor channel conditions. | ['Antonio M. Peinado', 'Victoria E. Sánchez', 'José L. Pérez-Córdoba', 'Ángel de la Torre'] | HMM-based channel error mitigation and its application to distributed speech recognition | 16,194 |
Deep Learning for Facial Keypoints Detection | ['Mikko Haavisto', 'Arto Kaarna', 'Lasse Lensu'] | Deep Learning for Facial Keypoints Detection | 786,496 |
Order Effects in Online Product Recommendation: A Scenario-based Analysis | ['Xunhua Guo', 'Mingyue Zhang', 'Chenyue Yang', 'Guoqing Chen'] | Order Effects in Online Product Recommendation: A Scenario-based Analysis | 904,961 |
Given a planar triangulation, a 3-orientation is an orientation of the internal edges so all internal vertices have out-degree three. Each 3-orientation gives rise to a unique edge coloring known as a Schnyder wood that has proven powerful for various computing and combinatorics applications. We consider natural Markov chains for sampling uniformly from the set of 3-orientations. First, we study a “triangle-reversing” chain on the space of 3-orientations of a fixed triangulation that reverses the orientation of the edges around a triangle in each move. We show that, when restricted to planar triangulations of maximum degree six, this Markov chain is rapidly mixing and we can approximately count 3-orientations. Next, we construct a triangulation with high degree on which this Markov chain mixes slowly. Finally, we consider an “edge-flipping” chain on the larger state space consisting of 3-orientations of all planar triangulations on a fixed number of vertices. We prove that this chain is always rapidly mixing. | ['Sarah Miracle', 'Dana Randall', 'Amanda Pascoe Streib', 'Prasad Tetali'] | Sampling and Counting 3-Orientations of Planar Triangulations | 717,166 |
This paper presents a high-performance sum of absolute difference (SAD) architecture for motion estimation, which is the most time-consuming and compute-intensive part of video coding. The proposed architecture contains novel and efficient optimizations to overcome bottlenecks discovered in existing approaches. In addition, sophisticated control logic with multiple early termination mechanisms further enhances execution speed and makes the architecture suitable for general-purpose usage. Hence, the proposed architecture is not restricted to a single block-matching algorithm in motion estimation, but a wide range of algorithms is supported. The proposed SAD architecture outperforms contemporary architectures in terms of execution speed and area efficiency. The proposed architecture with three pipeline stages, synthesized to a 0.18-μm CMOS technology, can attain a 770-MHz operating frequency at a cost of less than 5600 gates. Correspondingly, performance metrics for the proposed low-latency 2-stage architecture are 730 MHz and 7500 gates. | ['Jarno Vanne', 'Eero Aho', 'Timo D. Hämäläinen', 'Kimmo Kuusilinna'] | A High-Performance Sum of Absolute Difference Implementation for Motion Estimation | 157,282 |
Security is difficult to achieve on general-purpose computing platforms due to their complexity, excess functionality, and resource sharing. An alternative is the creation of a Tailored Trustworthy Space for the system or application class of interest. We focus on data-intensive computing systems using reconfigurable hardware to implement streaming operations, and provide security assurances that are independent of application software, middleware, or operating system integrity and correctness. All interaction between software and the dataflow hardware passes through an automatically synthesized and formally verified hardware controller incorporating enforcement and real-time monitoring of application-specific rules. Abstractions provided by the Bluespec high-level language assist in the translation of domain-specific policy rules to synthesized logic. For the cognitive radio example used, hardware-enforced policies include physical layer rules such as sanctioned spectrum usage. Policy changes cause the secure generation and transfer of a new controller-wrapped datapath hardware plug-in. Datapath dynamic block swaps and cryptographic operations are managed entirely by the hardware controller rather than software drivers. Design for performance and design for security are therefore simultaneously addressed since the datapath is configured and monitored at hardware speeds, and software has no access to datapath configurations and cryptographic keys. | ['Mohammed M. Farag', 'Lee W. Lerner', 'Cameron D. Patterson'] | Thwarting Software Attacks on Data-Intensive Platforms with Configurable Hardware-Assisted Application Rule Enforcement | 343,204 |
This paper presents a fuzzy control approach that guarantees absolute delays in web servers. Previous work has proposed the use of classical Proportional-Integral (PI) controllers for delay guarantees. However, a disadvantage of the classical PI controller is that the system model, which is obtained by system identification, does not match the real system exactly and inevitably degrades the performance of the web system. In contrast with classical PI controllers, fuzzy controllers are non-linear and therefore do not depend on an accurate model of the plant, i.e., the controlled system. Hence, fuzzy controllers seem to be very suitable for web servers. Our experiments show that fuzzy controllers indeed perform better than the PI controllers presented in earlier papers. | ['Yaya Wei', 'Chuang Lin', 'Xiaowen Chu', 'Thiemo Voigt'] | Fuzzy control for guaranteeing absolute delays in web servers | 498,880 |
Processing Emergent Features in Metaphor Comprehension | ['Asuka Terai', 'Robert L. Goldstone'] | Processing Emergent Features in Metaphor Comprehension | 996,850 |
This paper describes a new hierarchical approach to content-based image retrieval called the "customized-queries" approach (CQA). Contrary to the single feature vector approach which tries to classify the query and retrieve similar images in one step, CQA uses multiple feature sets and a two-step approach to retrieval. The first step classifies the query according to the class labels of the images using the features that best discriminate the classes. The second step then retrieves the most similar images within the predicted class using the features customized to distinguish "subclasses" within that class. Needing to find the customized feature subset for each class led us to investigate feature selection for unsupervised learning. As a result, we developed a new algorithm called FSSEM (feature subset selection using expectation-maximization clustering). We applied our approach to a database of high resolution computed tomography lung images and show that CQA radically improves the retrieval precision over the single feature vector approach. To determine whether our CBIR system is helpful to physicians, we conducted an evaluation trial with eight radiologists. The results show that our system using CQA retrieval doubled the doctors' diagnostic accuracy. | ['Jennifer G. Dy', 'Carla E. Brodley', 'Avinash C. Kak', 'Lynn S. Broderick', 'Alex M. Aisen'] | Unsupervised feature selection applied to content-based retrieval of lung images | 204,061 |
FlexCCT: Software for Maximum Likelihood Cultural Consensus Theory. | ['Stephen L. France', 'Mahyar Sharif Vaghefi', 'William H. Batchelder'] | FlexCCT: Software for Maximum Likelihood Cultural Consensus Theory. | 743,063 |
Broadband Internet access in mobile hotspots (e.g. public transport vehicles) through high-speed on-board local area networks and mobile routers is becoming an increasingly popular area of research and development. Mobile hotspot operators can provide faster, cheaper, and more stable communication services to on-board passengers using the multi-homing technique, whereby the mobile router is connected to a diverse array of wireless access technologies (e.g., GPRS, UMTS, 802.11) through a multiplicity of wireless service providers. As the set of available access networks may change frequently during each trip, the challenge for the mobile hotspot operators is to decide on how to "best" distribute the user data traffic among the multiple access networks. In this paper, we propose a new business model and a user traffic distribution algorithm that aims to maximize the profit for the mobile hotspot operator while providing an acceptable level of service. We also provide results from a detailed simulation study of this algorithm under various usage probability distributions. | ['Albert Yuen Tai Chung', 'Mahbub Hassan'] | Optimizing profit and performance for multi-homed mobile hotspots | 201,417 |
Editorial: Special Issue on Web Data Quality | ['Christian Bizer', 'Luna Dong', 'Ihab F. Ilyas', 'Maria-Esther Vidal'] | Editorial: Special Issue on Web Data Quality | 951,758 |
The annual International Web Rule Symposium (RuleML) is an international conference on research, applications, languages, and standards for rule technologies. It has evolved from an annual series of international workshops since 2002, international conferences in 2005 and 2006, and international symposia since 2007. It is the flagship event of the Rule Markup and Modeling Initiative (RuleML, http://ruleml.org), a nonprofit umbrella organization of several technical groups from academia, industry, and government working on rule technology and its applications. RuleML is the leading conference to build bridges between academia and industry in the field of rules and its applications, especially as part of the semantic technology stack. It is devoted to rule-based programming and rule-based systems including production rules systems, logic programming rule engines, and business rules engines/business rules management systems; Semantic Web rule languages and rule standards (e.g., RuleML, SWRL, RIF, PRR, SBVR, DMN, CL, Prolog); rule-based event processing languages and technologies; and research on inference rules, transformation rules, decision rules, production rules, and ECA rules. | ['Antonis Bikakis', 'Paul Fodor', 'Adrian Giurca', 'Leora Morgenstern'] | Introduction to the special issue on the International Web Rule Symposia 2012–2014 | 704,455 |
Traditionally, control systems use ad hoc techniques, such as shared internal data structures, to store control data. However, due to the increasing data volume in control systems, these internal data structures become increasingly difficult to maintain. A real-time database management system can provide an efficient and uniform way to structure and access data. However, the drawback with database management systems is the overhead added when accessing data. In this paper we introduce a new concept called database pointers, which provides fast and deterministic access to data in hard real-time database management systems compared to traditional database management systems. The concept is especially beneficial for hard real-time control systems where many control tasks each use few data elements at high frequencies. Database pointers can co-reside with a relational data model, and any updates made from the database pointer interface are immediately visible from the relational view. We show the efficiency of our approach by comparing it to tuple identifiers and relational processing. | ['Dag Nyström', 'Aleksandra Tesanovic', 'Christer Norström', 'Jörgen Hansson'] | Database pointers: A predictable way of manipulating hot data in hard real-time systems | 603,508 |
Self-organizing ad hoc and sensor networks require the capability of nodes to discover other devices in their neighborhood. This operation must be performed rapidly and in a way that is transparent to the applications running in the network. Neighbor discovery consists of a set of procedures a node has to perform, which consume energy resources that are very scarce in most self-organizing ad hoc scenarios. In this paper an analytical framework based on Markov chains is introduced for the modeling of neighbor discovery. This framework allows us to evaluate the energy cost due to the hunting process and the probability that a timely discovery will occur. These two performance measures clash with each other and thus an appropriate tradeoff is required. In this context, the proposed paradigm can be used for the performance evaluation and design of hunting processes. | ['Laura Galluccio', 'Alessandro Leonardi', 'Giacomo Morabito', 'Sergio Palazzo'] | Tradeoff between Energy-Efficiency and Timeliness of Neighbor Discovery in Self-Organizing Ad Hoc and Sensor Networks | 494,984 |
We investigate the practical questions of building a self-organizing robot swarm, using the iRobot Roomba cleaning robot as an experimental platform. Our goal is to employ self-organization for enhancing the cleaning efficiency of a Roomba swarm. The implementation uses RFID tags both for object- and location-based task recognition and for graffiti- or stigmata-style communication between robots. Easily modifiable rule systems are used for object ontologies and automatic task generation. Long-term planning and central coordination are avoided. | ['Tanel Tammet', 'Jüri Vain', 'Andres Puusepp', 'Enar Reilent', 'Alar Kuusik'] | RFID-based Communications for a Self-Organising Robot Swarm | 440,013 |
Long-Lasting Changes in Muscle Twitch Force during Simulated Work while Standing or Walking | ['Maria Gabriela Garcia', 'Rudolf Wall', 'Benjamin Steinhilber', 'Thomas Läubli', 'Bernard J. Martin'] | Long-Lasting Changes in Muscle Twitch Force during Simulated Work while Standing or Walking | 864,297 |
The multi-agent systems community has made great strides in investigating issues such as coordination and negotiation. However, when addressing human or human-agent behavior, very few approaches have accounted for the fact that people are embodied in the real world and act in geospatial environments. In the past, it has been difficult to perform experiments and collect data for such domains. However, with the spread of mobile technology that can run sophisticated applications and return location-based data, we are now in a position to investigate such questions. | ['Spencer Frazier', 'Alex Newnan', 'Yu-Han Chang', 'Rajiv T. Maheswaran'] | Team-It: location-based mobile games for multi-agent coordination and negotiation (demonstration) | 684,928 |
Unhandled exceptions crash programs, so a compile-time check that exceptions are handled should in principle make software more reliable. But designers of some recent languages have argued that the benefits of statically checked exceptions are not worth the costs. We introduce a new statically checked exception mechanism that addresses the problems with existing checked-exception mechanisms. In particular, it interacts well with higher-order functions and other design patterns. The key insight is that whether an exception should be treated as a "checked" exception is not a property of its type but rather of the context in which the exception propagates. Statically checked exceptions can "tunnel" through code that is oblivious to their presence, but the type system nevertheless checks that these exceptions are handled. Further, exceptions can be tunneled without being accidentally caught, by expanding the space of exception identifiers to identify the exception-handling context. The resulting mechanism is expressive and syntactically light, and can be implemented efficiently. We demonstrate the expressiveness of the mechanism using significant codebases and evaluate its performance. We have implemented this new exception mechanism as part of the new Genus programming language, but the mechanism could equally well be applied to other programming languages. | ['Yizhou Zhang', 'Guido Salvaneschi', 'Quinn Beightol', 'Barbara Liskov', 'Andrew C. Myers'] | Accepting blame for safe tunneled exceptions | 810,727 |
Testing NAND flash memories is a very complex issue due to the rapid scaling down of the technology and the related floating gate reliability issues; as a consequence, a complete and technology-independent test is needed. Several faults and disturbances have been identified for both NOR and NAND flash memories; however, they have never been considered together as a whole. In this work we analyze all the possible fault models for NAND flash memories and thus define a comprehensive and technology-independent fault model for NAND Flash memories, for which a simple but complete test method is presented. | ['Stefano Di Carlo', 'Michele Fabiano', 'Roberto Piazza', 'Paolo Ernesto Prinetto'] | Exploring modeling and testing of NAND flash memories | 158,355 |
Objective: Subcellular-sized chronically implanted recording electrodes have demonstrated significant improvement in single unit (SU) yield over larger recording probes. Additional work expands on this initial success by combining the subcellular fiber-like lattice structures with the design space versatility of silicon microfabrication to further improve the signal-to-noise ratio, density of electrodes, and stability of recorded units over months to years. However, ultrasmall microelectrodes present very high impedance, which must be lowered for SU recordings. While poly(3,4-ethylenedioxythiophene) (PEDOT) doped with polystyrene sulfonate (PSS) coatings have demonstrated great success in acute to early-chronic studies for lowering the electrode impedance, concern exists over long-term stability. Here, we demonstrate a new blend of PEDOT doped with carboxyl functionalized multiwalled carbon nanotubes (CNTs), which shows dramatic improvement over the traditional PEDOT/PSS formula. Methods: Lattice style subcellular electrode arrays were fabricated using a previously established method. PEDOT was polymerized with carboxylic acid functionalized carbon nanotubes onto high-impedance (8.0 ± 0.1 MΩ: M ± S.E.) 250-μm² gold recording sites. Results: PEDOT/CNT-coated subcellular electrodes demonstrated significant improvement in chronic spike recording stability over four months compared to PEDOT/PSS recording sites. Conclusion: These results demonstrate great promise for subcellular-sized recording and stimulation electrodes and long-term stability. Significance: This project uses leading-edge biomaterials to develop chronic neural probes that are small (subcellular) with excellent electrical properties for stable long-term recordings. High-density ultrasmall electrodes combined with advanced electrode surface modification are likely to make significant contributions to the development of long-term (permanent), high quality, and selective neural interfaces. | ['Takashi D.Y. Kozai', 'Kasey Catt', 'Zhanhong Du', 'Kyounghwan Na', 'Onnop Srivannavit', 'Razi Ul M Haque', 'John P. Seymour', 'Kensall D. Wise', 'Euisik Yoon', 'Xinyan Tracy Cui'] | Chronic In Vivo Evaluation of PEDOT/CNT for Stable Neural Recordings | 589,454 |
The increasing volume of short texts generated on social media sites, such as Twitter or Facebook, creates a great demand for effective and efficient topic modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it is not optimal due to its weakness in handling short texts with fast-changing topics and scalability concerns. In this paper, we propose a transfer learning approach that utilizes abundant labeled documents from other domains (such as Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting and result interpretation. Specifically, we develop Transfer Hierarchical LDA (thLDA) model, which incorporates the label information from other domains via informative priors. In addition, we develop a parallel implementation of our model for large-scale applications. We demonstrate the effectiveness of our thLDA model on both a microblogging dataset and standard text collections including AP and RCV1 datasets. | ['Jeon-Hyung Kang', 'Jun Ma', 'Yan Liu'] | Transfer Topic Modeling with Ease and Scalability | 253,079 |
Bayesian nonparametric methods based on the Dirichlet process (DP), the gamma process and the beta process have proven effective in capturing aspects of various datasets arising in machine learning. However, it is now recognized that such processes have their limitations in terms of the ability to capture power law behavior. As such, there is now considerable interest in models based on the Stable Process (SP), the Generalized Gamma process (GGP) and the Stable-beta process (SBP). These models present new challenges in terms of practical statistical implementation. In analogy to tractable processes such as the finite-dimensional Dirichlet process, we describe a class of random processes, which we call iid finite-dimensional BFRY processes, that enables one to begin to develop efficient posterior inference algorithms such as variational Bayes that readily scale to massive datasets. For illustrative purposes, we describe a simple variational Bayes algorithm for normalized SP mixture models, and demonstrate its usefulness with experiments on synthetic and real-world datasets. | ['Juho Lee', 'Lancelot F. James', 'Seungjin Choi'] | Finite-Dimensional BFRY Priors and Variational Bayesian Inference for Power Law Models | 938,411 |
We consider Black-Box continuous optimization by Estimation of Distribution Algorithms (EDA). In continuous EDA, the multivariate Gaussian distribution is widely used as a search operator, and it has the well-known advantage of modelling the correlation structure of the search variables, which univariate EDA lacks. However, the Gaussian distribution as a search operator is prone to premature convergence when the population is far from the optimum. Recent work suggests that replacing the univariate Gaussian with a univariate Cauchy distribution in EDA holds promise in alleviating this problem because it is able to make larger jumps in the search space due to the Cauchy distribution's heavy tails. In this paper, we propose the use of a multivariate Cauchy distribution to blend together the advantages of multivariate modelling with the ability of escaping early convergence to efficiently explore the search space. Experiments on 16 benchmark functions demonstrate the superiority of multivariate Cauchy EDA against univariate Cauchy EDA, and its advantages against multivariate Gaussian EDA when the population lies far from the optimum. | ['Momodou L. Sanyang', 'Ata Kabán'] | Multivariate Cauchy EDA Optimisation | 631,916 |
Due to their monolithic construction and superior wear and loss properties, flexure joints have been used to reduce the mechanism size and increase the positioning accuracy. The compliance of flexure joints, however, can affect the static and dynamic characteristics of the overall mechanism. To design mechanisms containing flexure joints, we have proposed a multi-objective optimization approach to take into account the multitude of performance metrics and design constraints. A Pareto frontier is first computed, and secondary design criteria, such as sensitivity and dynamic characteristics, are then applied to select the final design. To reduce the computation load and facilitate design iteration, a lumped spring approximation, the Paros-Weisbord model, is used to characterize the flexure joints and the pseudo-rigid-body model is used as an approximate description of mechanisms. This paper presents this approach applied to the design of a micro-gripper. The performance metrics are chosen to be the manipulability of the gripper opening and the decoupling of a stiffness matrix (reflecting the remote center of compliance criterion). Different design and initial fabrication results are included. | ['Byoung Hun Kang', 'John T. Wen'] | Design of Compliant MEMS Grippers for Micro-Assembly Tasks | 188,260 |
MEDLEY is an alignment method based on lexical and structural treatments. This method includes a specific technique to deal with multilingual ontologies. This paper introduces MEDLEY and summarizes the results for OAEI 2012. | ['Walid Hassen'] | Medley results for OAEI 2012 | 793,580 |
The gradient adjusted predictor (GAP), used in CALIC, consists of seven slope bins, with one predictor associated with each bin. As the relationship between the predicted pixels and their contexts is complex, these predictors may not be appropriate for predicting the pixels belonging to the respective slope bins. In this work, we present a least-squares (LS) based approach to find optimal predictors for pixels belonging to the various slope bins of GAP. Our simulation results show that the proposed method achieves performance similar to that of edge directed prediction (EDP) and Run-length and Adaptive Linear Predictive (RALP) coding. EDP and RALP use a symmetrical encoder and decoder structure. On the other hand, we propose an unsymmetrical codec that has higher encoding complexity but a very fast decoder - as fast as a decoder based on the GAP principle. However, our encoder is computationally much simpler than EDP- and RALP-based encoders. | ['Anil Kumar Tiwari', 'Ratnam V. Raja Kumar'] | Least squares based optimal switched predictors for lossless compression of images | 439,390 |
Combinatorial Testing: From Algorithms to Applications | ['Angelo Michele Gargantini', 'Rachel Tzoref-Brill'] | Combinatorial Testing: From Algorithms to Applications | 859,198 |
Optimization of Centrifugal Impeller Using Evolutionary Strategies and Artificial Neural Networks | ['René Meier', 'Franz Joos'] | Optimization of Centrifugal Impeller Using Evolutionary Strategies and Artificial Neural Networks | 194,283 |
In Dynamic Broadcast it is assumed that user terminals provide an interface to both a broadcast network and a broadband network. In addition, they are expected to be equipped with a storage device. The user terminals are capable of composing linear TV services of individual content items, which are either being received live via the broadcast or the broadband network or which are played back from a local storage device if they have been received before. Multiple content delivery mechanisms are supported, including live- and pre-transmission of TV programs. Inside the user terminals, a combined usage of one or more broadcast tuners, the broadband interface, and the local storage device is therefore required. In this paper, we discuss how the interaction of these devices can be managed in order to allow for the live and time-shifted delivery of TV content via dynamically changing distribution channels. Further, a demonstrator of a user terminal is presented, which allows the interruption-free presentation of TV services in a Dynamic Broadcast environment, where live as well as time-shifted content delivery mechanisms are in use. | ['Peter Neumann', 'Ulrich Reimers'] | Live and time-shifted content delivery for dynamic broadcast: Terminal aspects | 309,048 |
Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts. | ['Rahul Pandita', 'Xusheng Xiao', 'Hao Zhong', 'Tao Xie', 'Stephen Oney', 'Amit M. Paradkar'] | Inferring method specifications from natural language API descriptions | 63,538 |
We show that inductive logic programming (ILP) is a powerful tool for spatial data mining. We further develop the direction started (or symbolised) by GeoMiner [9] and argue that the technique developed for database schema design in deductive object-oriented databases is fully usable for spatial mining and overcomes, in expressive power, some other mining methods. An inductive query language with richer semantics is proposed and three kinds of inductive queries are described. Two of them are improved versions of DBMiner [8] rules. The third kind of rules, dependency rules, allows the comparison of two or more subsets. Then a description of the GWiM mining system as well as the results achieved by the system is given. We conclude with a discussion of the weaknesses of the method. | ['Lubos Popelínsky'] | Knowledge Discovery in Spatial Data by Means of ILP | 359,997 |
This paper proposes a method to increase the impedance range of admittance-type haptic interfaces. Admittance-type haptic interfaces are used in various applications that typically require interaction with high-impedance virtual environments. However, the performance of admittance haptic interfaces is often judged by the lower boundary of the impedance that can be achieved without stability problems, in particular the minimum displayable inertia. It is well known that rendering a low value of inertia easily makes admittance-type haptic interfaces unstable. This paper extends the Time Domain Passivity Approach (TDPA) to lower the minimum achievable inertia in an admittance-type haptic interface. To use the well-developed TDPA framework, an admittance haptic interface should be represented in the network domain with clear energy flows, which was not straightforward due to unclear causality. Therefore, by introducing the concept of dependent effort and flow sources, the admittance-type haptic interface is represented in the electrical network domain. This network representation gives us clear causality and consequently allows TDPA to be implemented. The proposed idea was experimentally verified and found successful in bringing the minimum inertia down to a value 10 times lower than without TDPA. | ['Muhammad Nabeel', 'JaeJun Lee', 'Usman Mehmood', 'Aghil Jafari', 'Jung-Hoon Hwang', 'Jee-Hwan Ryu'] | Increasing the impedance range of admittance-type haptic interfaces by using Time Domain Passivity Approach | 581,612 |
Unforeseen events such as node failures and resource contention can have a severe impact on the performance of data processing frameworks, such as Hadoop, especially in cloud environments where such incidents are common. SLA compliance in the presence of such events requires the ability to quickly and dynamically resize infrastructure resources. Unfortunately, the distributed and stateful nature of data processing frameworks makes it challenging to accurately scale the system at run-time. In this paper, we present the design and implementation of a model-driven autoscaling solution for Hadoop clusters. We first develop novel gray-box performance models for Hadoop workloads that specifically relate job execution times to resource allocation and workload parameters. We then employ these models to dynamically determine the resources required to successfully complete the Hadoop jobs as per the user-specified SLA under various scenarios including node failures and multi-job executions. Our experimental results on three different Hadoop cloud clusters and across different workloads demonstrate the efficacy of our models and highlight their autoscaling capabilities. | ['Anshul Gandhi', 'Sidhartha Thota', 'Parijat Dube', 'Andrzej Kochut', 'Li Zhang'] | Autoscaling for Hadoop Clusters | 815,702 |
In next-generation cellular networks, device-to-device (D2D) communication is already considered a fundamental feature. A problem in multi-hop D2D networks is how to define forwarding algorithms that achieve, at the same time, high delivery ratio and low network overhead. In this paper we aim to understand the properties of group meetings by looking at their structure and regularity, with the final goal of applying such knowledge in the design of a forwarding algorithm for D2D multi-hop networks. We introduce a forwarding protocol, namely GROUPS-NET, which is aware of social group meetings and their evolution over time. Our algorithm is parameter-calibration free and does not require any knowledge about the social network structure of the system. In particular, different from state-of-the-art algorithms, GROUPS-NET does not need community detection, which is a complex and expensive task. We validate our algorithm using different publicly available data sources. In real large-scale scenarios, our algorithm achieves approximately the same delivery ratio as the state-of-the-art solution with up to 40% less network overhead. | ['Ivan Oliveira Nunes', 'Clayson Celes', 'Pedro O. S. Vaz de Melo', 'Antonio Alfredo Ferreira Loureiro'] | GROUPS-NET: Group Meetings Aware Routing in Multi-Hop D2D Networks | 766,828 |
Cognitive radios enable opportunistic transmission for secondary users (SUs) without interfering with the primary user (PU). Cyclo-stationary-based spectrum sensing methods are better than energy detection methods in the negative signal-to-noise ratio (SNR) decibel (dB) regime, in which case the noise variance cannot be exactly estimated. However, blind cyclo-stationary methods require a large number of symbols (and hence measurements). This paper aims to reduce the number of measurements in a blind sensing method (using a combination of linear prediction and QR decomposition) by employing compressed sensing at the receiver front-end, so as to reduce the A/D requirements needed with a large number of measurements, along with oversampling the received signal. Until now, compressed sensing has not been investigated at very low negative SNR (dB), e.g., −12 dB, which is very crucial in spectrum sensing. The novel algorithm in this paper overcomes this shortcoming, and its simulation results show that the SU is able to detect the PU signal, using far fewer measurements, even at very low negative SNR (dB). The proposed method also investigates the effect of joint and individual measurement matrices at multiple oversampled branches. | ['Parthapratim De', 'Udit Satija'] | Sparse Representation for Blind Spectrum Sensing in Cognitive Radio: A Compressed Sensing Approach | 642,225 |
Automated Activity Recognition in Clinical Documents | ['Camilo Thorne', 'Marco Montali', 'Diego Calvanese', 'Elena Cardillo', 'Claudio Eccher'] | Automated Activity Recognition in Clinical Documents | 616,449 |
This study strengthens the links between Mean Payoff Games (MPGs) and Energy Games (EGs). Firstly, we offer a faster $O(|V|^2|E|W)$ pseudo-polynomial time and $\Theta(|V|+|E|)$ space deterministic algorithm for solving the Value Problem and Optimal Strategy Synthesis in MPGs. This improves the best previously known estimates on the pseudo-polynomial time complexity to: \[ O(|E|\log |V|) + \Theta\Big(\sum_{v\in V}\texttt{deg}_{\Gamma}(v)\cdot\ell_{\Gamma}(v)\Big) = O(|V|^2|E|W), \] where $\ell_{\Gamma}(v)$ counts the number of times that a certain energy-lifting operator $\delta(\cdot, v)$ is applied to any $v\in V$, along a certain sequence of Value-Iterations on reweighted EGs; and $\texttt{deg}_{\Gamma}(v)$ is the degree of $v$. This improves significantly over a previously known pseudo-polynomial time estimate, i.e. $\Theta\big(|V|^2|E|W + \sum_{v\in V}\texttt{deg}_{\Gamma}(v)\cdot\ell_{\Gamma}(v)\big)$ [CR15, CR16], as the pseudo-polynomiality is now confined to depend solely on $\ell_\Gamma$. Secondly, we further explore the relationship between Optimal Positional Strategies (OPSs) in MPGs and Small Energy-Progress Measures (SEPMs) in reweighted EGs. It is observed that the space of all OPSs, $\texttt{opt}_{\Gamma}\Sigma^M_0$, admits a unique complete decomposition in terms of extremal SEPMs in reweighted EGs. This points out what we call the "Energy-Lattice $\mathcal{X}^*_{\Gamma}$ associated to $\texttt{opt}_{\Gamma}\Sigma^M_0$". Finally, a pseudo-polynomial total-time recursive procedure is offered for enumerating (w/o repetitions) all the elements of $\mathcal{X}^*_{\Gamma}$, and for computing the corresponding partitioning of $\texttt{opt}_{\Gamma}\Sigma^M_0$. | ['Carlo Comin', 'Romeo Rizzi'] | Faster O(|V|^2|E|W)-Time Energy Algorithms for Optimal Strategy Synthesis in Mean Payoff Games | 883,629 |
Many highly sophisticated tools exist for solving linear arithmetic optimization and feasibility problems. Here we analyze why it is difficult to use these tools inside systems for SAT Modulo Theories (SMT) for linear arithmetic: one needs support for disequalities, strict inequalities and, more importantly, for dealing with incorrect results due to the internal use of imprecise floating-point arithmetic. We explain how these problems can be overcome by means of result checking and error recovery policies. Second, by means of carefully designed experiments with, among other tools, the newest version of ILOG CPLEX and our own new Barcelogic T-solver for arithmetic, we show that, interestingly, the cost of result checking is only a small fraction of the total T-solver time. Third, we report on extensive experiments running exactly the same SMT search using CPLEX and Barcelogic as T-solvers, where CPLEX tends to be slower than Barcelogic. We analyze these at first sight surprising results, explaining why tools such as CPLEX are not very adequate (nor designed) for this kind of relatively small incremental problem. Finally, we show how our result checking techniques can still be very useful in combination with inexact floating-point-based T-solvers designed for incremental SMT problems. | ['Germain Faure', 'Robert Nieuwenhuis', 'Albert Oliveras', 'Enric Rodríguez-Carbonell'] | SAT modulo the theory of linear arithmetic: exact, inexact and commercial solvers | 11,722 |
In this paper, we propose an effective algorithm based on Extreme Learning Machine (ELM) for salient object detection. First, saliency maps generated by existing methods are taken as prior maps, from which training samples are collected for an ELM classifier. Second, the ELM classifier is learned to detect the salient regions, and the final results are generated by fusing multi-scale saliency maps. This ELM-based model can improve the performance of different state-of-the-art methods to a large degree. Furthermore, we present an integration mechanism to take advantage of the superiorities of multiple saliency maps. Extensive experiments on five datasets demonstrate that our method performs well and that significant improvement can be achieved when applying our model to existing saliency approaches. | ['Lu Zhang', 'Jianhua Li', 'Huchuan Lu'] | Saliency detection via extreme learning machine | 866,538 |
Several works proposed methods to make video streaming scalable over the number of clients, avoiding the linear growth of bandwidth requirements for the media source node. Some are based on overlay networks built on top of the IP protocol and distribute content between overlay partners. In this way the clients share their bandwidth, reducing the burden on the source node. Similar to data-oriented proposals, this work breaks the media into segments which are requested from partners when available. Novel in this technique is the explicit handling of losses with a selective retransmission mechanism based on H.264 content, controlled by estimated decoding importance of packets. | ['Carlos Lenz', 'Lau Cheuk Lung', 'Frank Siqueira'] | SeRViSO: a selective retransmission scheme for video streaming in overlay networks | 249,604 |
Beyond fun | ['John M. Carroll'] | Beyond fun | 714,120 |
Dynamic hand gesture recognition is a crucial but challenging task in the pattern recognition and computer vision communities. In this paper, we propose a novel feature vector which is suitable for representing dynamic hand gestures, and presents a satisfactory solution to recognizing dynamic hand gestures with a Leap Motion controller (LMC) only. These have not been reported in other papers. The feature vector with depth information is computed and fed into the Hidden Conditional Neural Field (HCNF) classifier to recognize dynamic hand gestures. The systematic framework of the proposed method includes two main steps: feature extraction and classification with the HCNF classifier. The proposed method is evaluated on two dynamic hand gesture datasets with frames acquired with a LMC. The recognition accuracy is 89.5% for the LeapMotion-Gesture3D dataset and 95.0% for the Handicraft-Gesture dataset. Experimental results show that the proposed method is suitable for certain dynamic hand gesture recognition tasks. | ['Wei Lu', 'Zheng Tong', 'Jinghui Chu'] | Dynamic Hand Gesture Recognition With Leap Motion Controller | 831,460 |
In this paper, we propose a method to derive and model data uncertainty from imprecise data. We view data imprecision and errors as the outcome of the precise data exposed to some uncertain channels, and our scheme is to directly derive the data uncertainty model from imprecise data, such that the derived data uncertainty information may be integrated into the succeeding mining process. To achieve the goal, we propose an Expectation Maximization (EM) based approach to detect erroneous data entries from the input data. The data uncertainty models are constructed by applying statistical analysis to the detected errors. Experimental results show that the proposed error detection approach can locate data errors and suggest alternative data entry values to improve classifiers built from imprecise data. In addition, the uncertain models derived for each individual attributes are shown to be close to the genuine uncertainty models used to corrupt the data. | ['Dan He', 'Xing-Quan Zhu', 'Xindong Wu'] | Error Detection and Uncertainty Modeling for Imprecise Data | 156,332 |
In this paper, we derive bounds on the structural similarity (SSIM) index as a function of quantization rate for fixed-rate uniform quantization of image discrete cosine transform (DCT) coefficients under the high rate assumption. The space domain SSIM index is first expressed in terms of the DCT coefficients of the space domain vectors. The transform domain SSIM Index is then used to derive bounds on the average SSIM index as a function of quantization rate for Gaussian and Laplacian sources. As an illustrative example, uniform quantization of the DCT coefficients of natural images is considered. We show that the SSIM index between the reference and quantized images fall within the bounds for a large set of natural images. Further, we show using a simple example that the proposed bounds could be very useful for rate allocation problems in practical image and video coding applications. | ['Sumohana S. Channappayya', 'Alan C. Bovik', 'W Robert Heath', 'Constantine Caramanis'] | Rate Bounds on SSIM Index of Quantized Image DCT Coefficients | 54,680 |
Today, smartphones are very powerful, and many of their applications use wireless multimedia communications. Preventing external dangers (threats) has become a major concern for experts. Android security has become a very important issue because of the free applications it provides and the features that make it very easy for anyone to develop an application and publish it on the Play Store. Some work has already been done on the Android security model, including several analyses of the model and frameworks aimed at enforcing security standards. In this article, we introduce a tool called PermisSecure that is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications that request unnecessary or dangerous permissions. | ['E Latifa', 'El Kiram My Ahmed'] | A New Protection for Android Applications | 738,829 |
This paper considers the basic PULL model of communication, in which in each round, each agent extracts information from few randomly chosen agents. We seek to identify the smallest amount of information revealed in each interaction (message size) that nevertheless allows for efficient and robust computations of fundamental information dissemination tasks. We focus on the Majority Bit Dissemination problem that considers a population of n agents, with a designated subset of source agents. Each source agent holds an input bit and each agent holds an output bit. The goal is to let all agents converge their output bits on the most frequent input bit of the sources (the majority bit ). Note that the particular case of a single source agent corresponds to the classical problem of Broadcast (also termed Rumor Spreading ). We concentrate on the severe fault-tolerant context of self-stabilization , in which a correct configuration must be reached eventually, despite all agents starting the execution with arbitrary initial states. In particular, the specification of who is a source and what is its initial input bit may be set by an adversary. We first design a general compiler which can essentially transform any self-stabilizing algorithm with a certain property (called "the bitwise-independence property ") that uses l -bits messages to one that uses only log l -bits messages, while paying only a small penalty in the running time. By applying this compiler recursively we then obtain a self-stabilizing Clock Synchronization protocol, in which agents synchronize their clocks modulo some given integer T , within O (log n log T ) rounds w.h.p., and using messages that contain 3 bits only. We then employ the new Clock Synchronization tool to obtain a self-stabilizing Majority Bit Dissemination protocol which converges in O (log n ) time, w.h.p., on every initial configuration, provided that the ratio of sources supporting the minority opinion is bounded away from half. Moreover, this protocol also uses only 3 bits per interaction. | ['Lucas Boczkowski', 'Amos Korman', 'Emanuele Natale'] | Minimizing message size in stochastic communication patterns: fast self-stabilizing protocols with 3 bits | 852,345 |
With ongoing healthcare payment reforms in the USA, radiology is moving from its current state of a revenue generating department to a new reality of a cost-center. Under bundled payment methods, radiology does not get reimbursed for each and every inpatient procedure, but rather, the hospital gets reimbursed for the entire hospital stay under an applicable diagnosis-related group code. The hospital case mix index (CMI) metric, as defined by the Centers for Medicare and Medicaid Services, has a significant impact on how much hospitals get reimbursed for an inpatient stay. Oftentimes, patients with the highest disease acuity are treated in tertiary care radiology departments. Therefore, the average hospital CMI based on the entire inpatient population may not be adequate to determine department-level resource utilization, such as the number of technologists and nurses, as case length and staffing intensity gets quite high for sicker patients. In this study, we determine CMI for the overall radiology department in a tertiary care setting based on inpatients undergoing radiology procedures. Between April and September 2015, CMI for radiology was 1.93. With an average of 2.81, interventional neuroradiology had the highest CMI out of the ten radiology sections. CMI was consistently higher across seven of the radiology sections than the average hospital CMI of 1.81. Our results suggest that inpatients undergoing radiology procedures were on average more complex in this hospital setting during the time period considered. This finding is relevant for accurate calculation of labor analytics and other predictive resource utilization tools. | ['Thusitha Dananjaya De Silva Mabotuwana', 'Christopher S. Hall', 'Sebastian Flacke', 'Shiby Thomas', 'Christoph Wald'] | Inpatient Complexity in Radiology-a Practical Application of the Case Mix Index Metric. | 990,917 |
An algorithm able to compute both the numerical values and the approximate symbolic expressions of poles and zeros of a circuit function is analyzed. At least for the given example, the numerical values obtained with this algorithm prove to be more accurate than those computed by HSPICE, SPECTRE and SAPWIN. The approximate symbolic pole expressions are also computed for this example. A comparison with known algorithms for the computation of the approximate pole/zero expressions as ANALOG INSYDES and SYMBA is performed. It follows that the symbolic LR algorithm, used in the proposed program, performs less drastic simplifications than the procedures used in ANALOG INSYDES and SYMBA. Difficulties arising in the computation of the approximate pole/zero expressions are discussed as well as some future developments in this area. | ['Alexandru Gabriel Gheorghe', 'Florin Constantinescu'] | Pole/Zero Computation for Linear Circuits | 923,981 |
This paper presents an FPGA accelerated power estimation methodology for a Cadence Tensilica Xtensa LX5 ASIP. Based on hybrid functional level (FLPA) and instruction level power analysis (ILPA), the model can be mapped onto an FPGA together with the functional emulation. This enables fast and accurate estimation of application-specific power consumption and energy per task at early design stages which is crucial for power-aware design of instruction set extensions. The approach allows both hardware and software designers to optimize their implementations for power efficiency. The methodology for the ASIP and considerations for FPGA implementation are described and validated against GTL power simulation on different benchmarks. Results yield a %MAE of less than 7.0% and NRMSE of less than 6.9%. Finally, instruction set extensions for traffic sign detection are evaluated on real-world image sizes. It is shown that performance is improved by 11.2x while still reducing required energy by 10.5x. | ['Sebastian Hesselbarth', 'Gregor Schewior', 'Holger Blume'] | Fast and accurate power estimation for application-specific instruction set processors using FPGA emulation | 578,269 |
Summary form only given, as follows. One of the central questions in topology is determining whether a given curve is knotted or unknotted. An algorithm to decide this question was given by Haken (1961), using the technique of normal surfaces. These surfaces are rigid, discretized surfaces, well suited for algorithmic analysis. Any oriented surface without boundary can be obtained from a sphere by adding "handles". The number of handles is called the genus of the surface, and the smallest genus of a spanning surface for a curve is called the genus of the curve. A curve has genus zero if and only if it is unknotted. Schubert extended Haken's work, giving an algorithm to determine the genus of a curve in any 3-manifold. We examine the problem of deciding whether a polygonal knot in a closed triangulated three-dimensional manifold bounds a surface of genus at most g, 3-MANIFOLD KNOT GENUS. Previous work of Hass, Lagarias and Pippenger had shown that this problem is in PSPACE. No lower bounds on the running time were previously known. We show that this problem is NP-complete. | ['Ian Agol', 'Joel Hass', 'William P. Thurston'] | 3-MANIFOLD KNOT GENUS is NP-complete | 517,622 |
Due to intensified globalization of supply networks and growing e-commerce activities, logistics service providers have to deal with steadily increasing shipment volumes. Highly performing transshipment terminals have been identified as an essential basis to handle those volumes within transportation networks. In recent years, internal sorting processes have already been the focus of analysis, standardization and optimization. In contrast to that, yard management in the terminals is still operated with very limited automated intelligence. Due to the fact that performance of internal sorting operations can only be achieved by constantly high input flows, an enhanced efficiency of yard operations is the main challenge to increase the performance of transshipment terminals. Therefore a simulation method for yard operations in terminals has been developed which allows detailed analysis. Furthermore, it has been applied on an exemplary terminal and different controlling strategies have been tested concerning their impact on performance aspects. | ['Uwe Clausen', 'Ina Goedicke'] | Simulation of yard operations and management in transshipment terminals | 360,242 |
In this paper, we describe our investigation of the use of scalable vector graphics (SVG) as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators for initialisation, mutation and crossover. We perform two series of experiments; in the first series of experiments, we investigate the feasibility of SVG as a genotype representation for evolutionary art, and evolve abstract images using a number of aesthetic measures as fitness functions. In the second series of experiments, we use existing images as source material. We also designed and implemented an ad-hoc aesthetic measure for 'pop-art' and used this to evolve images that are visually similar to pop-art. All experiments described in this paper are done without a human in the loop. All images and SVG code examples in this paper are available at http://www.eelcodenheijer.nl/research. | ['Eelco den Heijer', 'A. E. Eiben'] | Using scalable vector graphics to evolve art | 354,448 |
Drummers are required to learn the correct stroking order to play drums efficiently. However, general musical scores for drums do not indicate whether drummers should stroke each drum with the left hand or the right hand. Therefore, drum teachers have to handwrite such annotations onto the musical score or use software applications. There is no musical score generation system for drums that indicates the hitting hands. In this research, we propose a system that generates a musical score indicating the hitting hand for drum performance. Our proposed STICK TRACK recognizes the hitting hand on the basis of data from gyro sensors embedded in the drumsticks and MIDI messages from an electronic drum. We constructed a prototype system and evaluated its effectiveness. | ['Hiroyuki Kanke', 'Tsutomu Terada', 'Masahiko Tsukamoto'] | STICK TRACK: a Musical Score Generation System for Drums Considering Hitting Hand | 935,722 |
This paper presents a novel control methodology for the tracking control of high-order continuous-time nonlinear systems with unknown dynamics and external disturbance. The control signal consists of the robust integral of the sign of the error (RISE) feedback signal multiplied by an adaptive gain, plus a neural network (NN) output. The two-layer NN learns the system dynamics in an online manner while residual reconstruction errors and the external bounded system disturbances are overcome by the RISE signal. Semi-global asymptotic tracking performance is theoretically guaranteed by using the standard Lyapunov method, while the NN weights and all other signals are shown to be bounded. Further, simulation results are presented to illustrate the control performance. | ['Qinmin Yang', 'Sarangapani Jagannathan', 'Youxian Sun'] | NN/RISE-based asymptotic tracking control of uncertain nonlinear systems | 180,518 |
This paper describes a singing design method based on morphing, and the design and development of an intuitive interface to assist morphing-based singing design. The proposed interface has a function for real-time morphing, based on simple mouse operation, and an editor to control the singing features in detail. The user is able to enhance singing voices efficiently by using these two functions. In this paper, we discuss the requirements for an interface to assist in morphing-based singing design, and develop an interface that fulfills these requirements. | ['Masanori Morise', 'Masato Onishi', 'Hideki Kawahara', 'Haruhiro Katayose'] | v.morish'09: A Morphing-Based Singing Design Interface for Vocal Melodies | 121,615 |
Multi-language, Multi-target Compiler Development: Evolution of the Gardens Point Compiler Project | ['K. John Gough'] | Multi-language, Multi-target Compiler Development: Evolution of the Gardens Point Compiler Project | 39,491 |
Analogue VLSI can be used to implement spike timing dependent neuromorphic training algorithms. This work presents circuitry that uses spike timing to "adapt out" the effects of device mismatch in such circuits. Simulation results for the circuit implemented in a 0.35 μm CMOS process are reported. | ['Katherine Cameron', 'Alan Murray'] | Can spike timing dependent plasticity compensate for process mismatch in neuromorphic analogue VLSI | 463,197 |
Describes efficient algorithms for accurately estimating the number of matches of a small node-labeled tree, i.e. a twig, in a large node-labeled tree, using a summary data structure. This problem is of interest for queries on XML and other hierarchical data, to provide query feedback and for cost-based query optimization. Our summary data structure scalably represents approximate frequency information about twiglets (i.e. small twigs) in the data tree. Given a twig query, the number of matches is estimated by creating a set of query twiglets, and combining two complementary approaches: set hashing, used to estimate the number of matches of each query twiglet, and maximal overlap, used to combine the query twiglet estimates into an estimate for the twig query. We propose several estimation algorithms that apply these approaches on query twiglets formed using variations on different twiglet decomposition techniques. We present an extensive experimental evaluation using several real XML data sets, with a variety of twig queries. Our results demonstrate that accurate and robust estimates can be achieved, even with limited space. | ['Zhiyuan Chen', 'H. V. Jagadish', 'Flip Korn', 'Nick Koudas', 'S. Muthukrishnan', 'Raymond T. Ng', 'Divesh Srivastava'] | Counting twig matches in a tree | 384,321 |
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed in the result of the collision between the wave-fragments. Thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past but little has been done cascading the gates into binary arithmetical circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of another input variable. The gate is made of three channels: two channels cross each other at an angle, a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along input channels, collide at the junction they merge into a single wave-front traveling along the third channel. If there is just one wave-front in the input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using the numerical integration of Oregonator equations. | ['Andrew Adamatzky'] | Binary full adder, made of fusion gates, in sub-excitable Belousov-Zhabotinsky system | 581,548 |
Reconstruction of Super Resolution High Dynamic Range Image from Multiple-Exposure Images. | ['Tae-Hyoung Lee', 'Ho-Gun Ha', 'Yeong-Ho Ha'] | Reconstruction of Super Resolution High Dynamic Range Image from Multiple-Exposure Images. | 791,519 |
Web application internationalization frameworks allow businesses to more easily market and sell their products and services around the world. However, internationalization can lead to problems. Text expansion and contraction after translation may result in a distortion of the layout of the translated versions of a webpage, which can reduce their usability and aesthetics. In this paper, we investigate and report on the frequency and severity of different types of failures in webpages' user interfaces that are due to internationalization. In our study, we analyzed 449 real world internationalized webpages. Our results showed that internationalization failures occur frequently and they range significantly in terms of severity and impact on the web applications. These findings motivate and guide future work in this area. | ['Abdulmajeed Alameer', 'William G. J. Halfond'] | An Empirical Study of Internationalization Failures in the Web | 843,562 |
In order to facilitate the development of agent-based software, several agent programming languages and architectures have been created. Plans in these architectures are often self-contained procedures with an associated triggering event and a context condition, while any further information about the consequences of executing a plan is absent. However, agents designed using such an approach have limited flexibility at runtime, and rely on the designer's ability to foresee all relevant situations an agent might have to handle. In order to overcome this limitation, we have created AgentSpeak(PL), an interpreter capable of performing state-space planning to generate new high-level plans. As the planning module creates new plans, the plan library is expanded, improving performance over time. However, for new plans to be useful in the long run, it is critical that the context conditions associated with new plans are carefully generated. In this paper we describe a plan reuse technique aimed at improving an agent's runtime performance by deriving optimal context conditions for new plans, allowing an agent to reuse generated plans as much as possible. | ['Felipe Rech Meneguzzi', 'Michael Luck'] | Leveraging New Plans in AgentSpeak(PL) | 383,231 |
In view of the defects of traditional water quality evaluation models, a new fuzzy neural network (FNN) comprehensive evaluation model is developed, based on fuzzy neural network theory, to evaluate surface water quality in Suzhou. The fuzzy neural network is a new type of neural network consisting of a Radial Basis network and a competitive neural network, which is simple in structure, easy to train and widely used. The FNN model is applied to evaluate water quality at representative sections in the Suzhou surface water area from 1999 to 2002. The results indicate that the FNN model is suitable for water quality evaluation. The analysis shows that it is important to implement effective measures for pollution control. | ['Changjun Zhu', 'Zhenchun Hao'] | Fuzzy Neural Network Model and Its Application in Water Quality Evaluation | 296,461 |
The network management area is currently dealing with huge amounts of information, which are produced, for example, by large scale and high-speed networks, heterogeneous devices, and monitoring and notification systems. Researchers and network administrators are frequently supported by information visualization techniques in the task of analyzing these large data sets. The simple network management protocol (SNMP) is the de facto standard for TCP/IP networks management. Despite its importance, there are no specific visualizations defined for SNMP traffic traces. In this paper we present a study on techniques for visualizing SNMP trace files, motivated by the fact that general purpose network traffic visualizations available today are not suitable for SNMP observation. Our proposed techniques have been prototyped in a software tool called management traffic analyzer, which has been used to analyze and visualize SNMP traces. | ['Ewerton Monteiro Salvador', 'Lisandro Zambenedetti Granville'] | Using visualization techniques for SNMP traffic analyses | 519,581 |
We examined students' definition of correctness as reflected by their decisions whether certain programs are correct. Using a questionnaire we found that students understand correctness as a relative property of the program and therefore might decide that a program is correct even when they evidence its incorrect behavior. We also found that students' definitions of systematic testing are inherently different from that of professionals, yet are consistent with their tolerance to errors. | ['Yifat Ben-David Kolikant'] | Students' alternative standards for correctness | 216,979 |
In recent years, several algorithms for mining frequent subgraphs in graph databases have been proposed, with a major application area being the discovery of frequent substructures of biomolecules. Unfortunately, most of these algorithms still struggle with fairly long execution times if larger substructures or molecular fragments are desired. We describe two advanced pruning strategies - equivalent sibling pruning and perfect extension pruning - that can be used to speed up the MoFa algorithm (introduced by C. Borgelt and M.R. Berthold, 2002) in the search for closed molecular fragments, as we demonstrate with experiments on the NCI's HIV database. | ['Christian Borgelt', 'Thorsten Meinl', 'Michael R. Berthold'] | Advanced pruning strategies to speed up mining closed molecular fragments | 6,209 |
Graph semi-supervised learning (GSSL) is a technique that uses a combination of labeled and unlabeled nodes on a graph to determine a classifier for new, incoming data. This problem can be analyzed through the lens of graph signal processing. In particular, the penalty functions used in the optimization formulation of standard GSSL algorithms can be interpreted as appropriately-defined filters in the Graph Fourier domain. We propose a wavelet-regularized semi-supervised learning algorithm using suitably-defined spline-like graph wavelets. These wavelets are critically-sampled, perfect-reconstruction basis representations, in contrast to much of the existing work proposing overcomplete representations. Critical sampling is essential for controlling the complexity in applications dealing with large scale datasets. We are also interested in understanding when wavelet-regularized approaches perform better than traditional Fourier-based regularizers. We compare the performance of our proposed spline-like, wavelet-regularized learning algorithm (as well as other existing graph wavelet designs) to some standard graph semi-supervised learning techniques on synthetic and real-world datasets. | ['Venkatesan N. Ekambaram', 'Giulia C. Fanti', 'Babak Ayazifar', 'Kannan Ramchandran'] | Wavelet-regularized graph semi-supervised learning | 917,427 |
Automatic generation of repeated patient information for tailoring clinical notes. | ['Frank Meng', 'Ricky K. Taira', 'Alex A. T. Bui', 'Hooshang Kangarloo', 'Bernard M. Churchill'] | Automatic generation of repeated patient information for tailoring clinical notes. | 810,351 |
Continuum manipulators have virtually infinite degrees of freedom (DOF) and are therefore capable of highly dexterous motions. This paper studies the forward and inverse kinematic problems for these types of manipulators. The presented kinematic model utilizes multiple serially connected segments to mimic the continuum morphology. A spline interpolation method is used to generate the backbone curve of the manipulator, and an inverse control strategy is developed to relate the manipulator position and orientation to actuator inputs. A robotic continuum manipulator using pneumatic muscle actuators (PMA) is then constructed to evaluate the model and control strategy. Simulation and experiment results show that the presented methods are able to control the continuum manipulator to perform some stereotyped motions. | ['Rongjie Kang', 'Emanuele Guglielmino', 'David T. Branson', 'Darwin G. Caldwell'] | Kinematic model and inverse control for continuum manipulators | 29,918 |
In this paper, the delay-constrained performance of a multiple-input multiple-output (MIMO) communication system in a dense environment with co-channel interference is investigated. We apply orthogonal space-time block coding (OSTBC) at the transmitter, and to alleviate the high complexity and cost of the MIMO system, a receive antenna selection (RAS) scheme is employed in the downlink. Here, for simple and cheap mobile handsets, one antenna is chosen at the receiver in each use of the channel. Under these assumptions, the maximum constant arrival rate with a delay quality-of-service guarantee over the wireless channel is derived. We obtain a closed-form solution for the effective capacity of the MIMO–OSTBC channel with the RAS scheme under quasi-static Rayleigh fading and co-channel interference. Finally, numerical simulations are provided and verify the theoretical results. | ['Mohammad Lari'] | Effective capacity of receive antenna selection MIMO–OSTBC systems in co-channel interference | 628,298 |
With the rapid advance of the social media, the challenge is to develop new techniques and standards to measure the influence of people or brands in the online social networks. Each website has its way of ranking the display of the most influential members of its virtual society. However, most of the current measurement methods are incomplete and one-dimensional. This paper presents a new measurement model, W-entropy, which has been developed based on information theory to study the influence of individuals based on different social networks. The model was tested using data from Facebook, Twitter, YouTube, and Google search. The proposed model can be extended to other platforms. To evaluate the effectiveness, the developed method was compared with Famecount ranking using the same data with different weight distributions. The result shows that W-entropy method is suitable for index ranking to reflect uneven information distribution across various social networks. | ['Li Weigang', 'Zheng Jianya', 'Guiqiu Liu'] | W-entropy method to measure the influence of the members from social networks | 6,594 |
Recognizing Visual Categories with Symbol-Relational Grammars and Bayesian Networks | ['Elías Ruiz', 'L. Enrique Sucar'] | Recognizing Visual Categories with Symbol-Relational Grammars and Bayesian Networks | 661,779 |
Unprecedented growth in the interdisciplinary domain of biomedical informatics reflects the recent advancements in genomic sequence availability, high-content biotechnology screening systems, as well as the expectations of computational biology to command a leading role in drug discovery and disease characterization. These forces have moved much of life sciences research almost completely into the computational domain. Importantly, educational training in biomedical informatics has been limited to students enrolled in the life sciences curricula, yet much of the skills needed to succeed in biomedical informatics involve or augment training in information technology curricula. This manuscript describes the methods and rationale for training students enrolled in information technology curricula in the field of biomedical informatics, which augments the existing information technology curriculum and provides training on specific subjects in Biomedical Informatics not emphasized in bioinformatics courses offered in life science programs, and does not require prerequisite courses in the life sciences. | ['Michael D. Kane', 'Jeffrey L. Brewer'] | An information technology emphasis in biomedical informatics education | 68,712 |
Design and Implementation of P2P Streaming Systems for Webcast | ['Yusuke Gotoh', 'Kentaro Suzuki', 'Tomoki Yoshihisa', 'Masanori Kanazawa'] | Design and Implementation of P2P Streaming Systems for Webcast | 335,762 |
Analyses of large simulation data often concentrate on regions in space and in time that contain important information. As simulations adopt Adaptive Mesh Refinement (AMR), the data records from a region of interest could be widely scattered on storage devices and accessing interesting regions results in significantly reduced I/O performance. In this work, we study the organization of block-structured AMR data on storage to improve performance of spatio-temporal data accesses. AMR has a complex hierarchical multi-resolution data structure that does not fit easily with the existing approaches that focus on uniform mesh data. To enable efficient AMR read accesses, we develop an in situ data layout optimization framework. Our framework automatically selects from a set of candidate layouts based on a performance model, and reorganizes the data before writing to storage. We evaluate this framework with three AMR datasets and access patterns derived from scientific applications. Our performance model is able to identify the best layout scheme and yields up to a 3X read performance improvement compared to the original layout. Though it is not possible to turn all read accesses into contiguous reads, we are able to achieve 90% of contiguous read throughput with the optimized layouts on average. | ['Houjun Tang', 'Suren Byna', 'Steve Harenberg', 'Wenzhao Zhang', 'Xiaocheng Zou', 'Daniel F. Martin', 'Bin Dong', 'Dharshi Devendran', 'Kesheng Wu', 'David Trebotich', 'Scott Klasky', 'Nagiza F. Samatova'] | In Situ Storage Layout Optimization for AMR Spatio-temporal Read Accesses | 894,520 |
The vision of smart cities is in the usage of ICT and web technologies to connect and monitor different elements of the urban space such as buildings, power and transport networks. Thus, the aim is to identify new solutions with the purpose of improving sustainability and livability. Mobile and participatory sensing systems play a big role. Using smartphones as a tool for social impact, it is possible to take advantage of their sensing and communication capabilities to monitor the environment. Even though this kind of system seems to be "centralized", the data acquisition by mobile devices, the analysis in data centers and the user centered results produce a system that is, in essence, a distributed system. In this article we present our participatory sensing system: BeCity. BeCity takes advantage of the collective knowledge of transportation cyclists to improve the quality of city cycling. It provides refined (and up-to-date) city usage information for cycling associations and government entities. Moreover, the distilled information is returned to the users, by recommending bike-friendly route planning. Our contribution is twofold. First, the distributed system by itself. Second, the algorithm to suggest bike routes which considers both the shortest and the most popular route. In many cases, the recommendation algorithm performs better suggestions than the one used by Google Maps. | ['Salomon Torres', 'Felipe Lalanne', 'Gabriel Del Canto', 'Fernando Morales', 'Javier Bustos-Jiménez', 'Patricio Reyes'] | BeCity: sensing and sensibility on urban cycling for smarter cities | 660,859 |
Streamline modeling is a design methodology for fair free-form surfaces in which the tangent vectors are specified/manipulated to generate/deform the surfaces instead of the control points of traditional surface representations (K.T. Miura et al., 1998). Creation and deformation of complex 3D free-form shapes generally require a large amount of labour and cost. Therefore, the authors propose a novel modeling technique based on fluid flow dynamics in order to create and deform high-quality surfaces more intuitively with a smaller number of parameters, building on streamline modeling. We construct a flow field based on potential flow, then calculate the tangent vectors in the flow field that are required for streamline modeling. Streamlines are generated by numerically integrating the tangent vectors, and the surface is represented as a set of streamlines. The deformation of the surface is performed by changing the flow field. We have developed a prototype design system using the new modeling technique. | ['Yasuhiro Suzuki', 'Kenjiro T. Miura', 'Ichiro Tanaka', 'Hiroshi Masuda'] | Streamline modeling based on potential flow | 89,799 |
We study the problem of global predicate detection in presence of permanent and transient failures. We term the transient failures as small faults. We show that it is impossible to detect predicates in an asynchronous distributed system prone to small faults even if nodes are equipped with a powerful device known as failure detector sequencer (denoted by Σ). To redress this impossibility, we introduce a theoretical device, known as a small fault sequencer (denoted by ΣSF), and show that ΣSF is necessary and sufficient for predicate detection. Unfortunately, we also show that ΣSF cannot be implemented even in a synchronous distributed system. Fortunately, however, we show that predicate detection can be achieved with high probability in synchronous systems. | ['Felix C. Freiling', 'Arshad Jhumka'] | Global predicate detection in distributed systems with small faults | 373,373 |