A Review of Theory and Practice in Scientometrics <s> Indicators of Impact: Citations <s> Citations support the communication of specialist knowledge by allowing authors and readers to make specific selections in several contexts at the same time. In the interactions between the social network of (first-order) authors and the network of their reflexive (that is, second-order) communications, a sub-textual code of communication with a distributed character has emerged. The recursive operation of this dual-layered network induces the perception of a cognitive dimension in scientific communication.Citation analysis reflects on citation practices. Reference lists are aggregated in scientometric analysis using one (or sometimes two) of the available contexts to reduce the complexity: geometrical representations (‘mappings’) of dynamic operations are reflected in corresponding theories of citation. For example, a sociological interpretation of citations can be distinguished from an information-theoretical one. The specific contexts represented in the modern citation can be deconstructed from the perspective of the cultural evolution of scientific communication. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Impact: Citations <s> Because of the widespread use of citations in evaluation, we tend to think of them primarily as a form of colleague recognition. This interpretation neglects rhetorical factors that shape patterns of citations. After reviewing sociological theories of citation, this paper argues that we should think of citations first as rhetoric and second as reward. Some implications of this view for quantitative modeling of the citation process are drawn. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Impact: Citations <s> This chapter contains sections titled: Introduction, Proliferation of Performance Indicators, Strategic Behavior, Ambivalent Attitudes, The Citation as Institution, The Citation as Infrastructure, References <s> BIB003
|
We should begin by noting that the whole idea of the citation being a fundamental indicator of impact, let alone quality, is itself the subject of considerable debate. This concerns: the reasons for citing others' work (one study lists 15 such reasons), or for not citing it; the meaning or interpretation to be given to citations BIB002 BIB001; their place within scientific culture BIB003; and the practical problems and biases of citation analysis. This wider context will be discussed later; this section concentrates on the technical aspects of citation metrics. The basic unit of analysis is a collection of papers (or, more generally, research outputs including books, reports, etc., although as pointed out in Section 2 the main databases only cover journal papers) and the number of citations they have received over a certain period of time. There are three possible situations: a fixed collection observed over a fixed period of time (e.g., computing JIFs); a fixed collection over an extending period of time (e.g., computing JIFs over different time windows); or a collection that is developing over time (e.g., observing the dynamic behaviour of citations over time).
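To fix ideas, the following minimal Python sketch (all paper names and citation counts are invented for illustration) represents such a collection as papers with per-year citation counts, and computes totals over a fixed citation window and over an extending one.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    """A publication together with the citations it received in each calendar year."""
    title: str
    year: int                      # publication year
    cites_by_year: dict[int, int]  # e.g. {2011: 3, 2012: 7, ...}

    def citations(self, start: int, end: int) -> int:
        """Citations received in the window [start, end], inclusive."""
        return sum(n for y, n in self.cites_by_year.items() if start <= y <= end)

# A toy collection; all values are invented.
collection = [
    Paper("Paper A", 2010, {2010: 1, 2011: 4, 2012: 6, 2013: 5}),
    Paper("Paper B", 2010, {2011: 2, 2012: 3, 2013: 3, 2014: 8}),
]

# (1) Fixed collection, fixed window (a JIF-style calculation):
fixed_window = sum(p.citations(2011, 2012) for p in collection)

# (2) Fixed collection, extending window (same papers, progressively longer horizons):
extending = [sum(p.citations(2011, end) for p in collection) for end in (2012, 2013, 2014)]

print(fixed_window, extending)
```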
|
A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> In an exploratory study, the time behaviour of citations to articles of seven journals representing different scientific fields (sociology, psychology, chemistry, general and inter nal medicine, statistics and probability theory) were analysed to establish: (i) differences in ageing and reception speed between social sciences and other science fields, to determine (ii) if there are connections between ageing and reception, and (iii) if deviations are due to fields or individ ual journals. Bibliometric methods and citation-based indi cators were used within a stochastic model. It was found that obsolescence of the social science journals in the set is slower than for the medical and chemistry journals. The behaviour of the mathematical journal is similar to the ones in social sciences. The study suggests that ageing seems to be specific to the field rather than to the individual journal. On the other hand, slow ageing does not necessarily corre spond with slow response. Impact factors based on the usual tw... <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> The paper identifies 29 models that the literature suggests are appropriate for technological forecasting. These models are divided into three classes according to the timing of the point of inflexion in the innovation or substitution process. Faced with a given data set and such a choice, the issue of model selection needs to be addressed. Evidence used to aid model selection is drawn from measures of model fit and model stability. An analysis of the forecasting performance of these models using simulated data sets shows that it is easier to identify a class of possible models rather than the "best" model. This leads to the combining of model forecasts. The performance of the combined forecasts appears promising with a tendency to outperform the individual component models. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> In this article we further develop the theory for a stochastic model for the citation process in the presence of obsolescence to predict the future citation pattern of individual papers in a collection. More precisely, we investigate the conditional distribution--and its mean-- of the number of citations to a paper after time t, given the number of citations it has received up to time t. In an important parametric case it is shown that the expected number of future citations is a linear function of the current number, this being interpretable as an example of a success-breeds-success phenomenon. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> AbstractThe first-citation distribution, i.e. the cumulative distribution of the time period between publication of an article and the time it receives its first citation, has never been modelled by using well-known informetric distributions. An attempt to this is given in this paper. For the diachronous aging distribution we use a simple decreasing exponential model. For the distribution of the total number of received citations we use a classical Lotka function. The combination of these two tools yield new first-citation distributions.The model is then tested by applying nonlinear regression techniques. The obtained fits are very good and comparable with older experimental results of Rousseau and of Gupta and Rousseau. 
However our single model is capable of fitting all first-citation graphs, concave as well as S-shaped; in the older results one needed two different models for it. Our model is the function $$\Phi(t_1) = \gamma (1 - a^{t_1})^{\alpha - 1}.$$ Here γ is the fraction of the papers that eventually get cited, $t_1$ is the time of the first citation, a is the aging rate and α is Lotka's exponent. The combination of a and α in one formula is, to the best of our knowledge, new. The model hence provides estimates for these two important parameters. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> The stochastic model proposed recently by the author to describe the citation process in the presence of obsolescence is further investigated to illustrate the nth-citation distribution and the distribution of the total number of citations. The particular case where the latent rate has a gamma distribution is analysed in detail and is shown to be able to agree well with empirical data. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> In a bibliometric study of nine research departments in the field of biotechnology and molecular biology, indicators of research capacity, output and productivity were calculated, taking into account the researchers' participation in scientific collaboration as expressed in co-publications. In a quantitative approach, rankings of departments based on a number of different research performance indicators were compared with one another. The results were discussed with members from all nine departments involved. Two publication strategies were identified, denoted as a quantity of publication and a quality of publication strategy, and two strategies with respect to scientific collaboration were outlined, one focusing on multi-lateral and a second on bi-lateral collaborations. Our findings suggest that rankings of departments may be influenced by specific publication and management strategies, which in turn may depend upon the phase of development of the departments or their personnel structure. As a consequence, differences in rankings cannot be interpreted merely in terms of quality or significance of research. It is suggested that the problem of assigning papers resulting from multi-lateral collaboration to the contributing research groups has not yet been solved properly, and that more research is needed into the influence of a department's state of development and personnel structure upon the values of bibliometric indicators. A possible implication at the science policy level is that different requirements should hold for departments of different age or personnel structure. <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> We reply to the criticism of Opthof and Leydesdorff on the way in which our institute applies journal and field normalizations to citation counts. We point out why we believe most of the criticism is unjustified, but we also indicate where we think Opthof and Leydesdorff raise a valid point.
<s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics—Why should one use the mean and not the median?—and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions can not be used as a reliable instrument for the classification. © 2011 Wiley Periodicals, Inc. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> The findings of Bornmann, Leydesdorff, and Wang (2013b) revealed that the consideration of journal impact improves the prediction of long-term citation impact. This paper further explores the possibility of improving citation impact measurements on the base of a short citation window by the consideration of journal impact and other variables, such as the number of authors, the number of cited references, and the number of pages. The dataset contains 475,391 journal papers published in 1980 and indexed in Web of Science (WoS, Thomson Reuters), and all annual citation counts (from 1980 to 2010) for these papers. As an indicator of citation impact, we used percentiles of citations calculated using the approach of Hazen (1914). Our results show that citation impact measurement can really be improved: If factors generally influencing citation impact are considered in the statistical analysis, the explained variance in the long-term citation impact can be much increased. However, this increase is only visible when using the years shortly after publication but not when using later years. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> Group-based trajectory modeling GBTM is applied to the citation curves of articles in six journals and to all citable items in a single field of science virology, 24 journals to distinguish among the developmental trajectories in subpopulations. Can citation patterns of highly-cited papers be distinguished in an early phase as "fast-breaking" papers? Can "late bloomers" or "sleeping beauties" be identified? Most interesting, we find differences between "sticky knowledge claims" that continue to be cited more than 10 years after publication and "transient knowledge claims" that show a decay pattern after reaching a peak within a few years. Only papers following the trajectory of a "sticky knowledge claim" can be expected to have a sustained impact. 
These findings raise questions about indicators of "excellence" that use aggregated citation rates after 2 or 3 years e.g., impact factors. Because aggregated citation curves can also be composites of the two patterns, fifth-order polynomials with four bending points are needed to capture citation curves precisely. For the journals under study, the most frequently cited groups were furthermore much smaller than 10%. Although GBTM has proved a useful method for investigating differences among citation trajectories, the methodology does not allow us to define a percentage of highly cited papers inductively across different fields and journals. Using multinomial logistic regression, we conclude that predictor variables such as journal names, number of authors, etc., do not affect the stickiness of knowledge claims in terms of citations but only the levels of aggregated citations which are field-specific. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> Citation patterns <s> Current metrics for estimating a scientist's academic performance treat the author's publications as if these were solely attributable to the author. However, this approach ignores the substantive contributions of co-authors, leading to misjudgments about the individual's own scientific merits and consequently to misallocation of funding resources and academic positions. This problem is becoming the more urgent in the biomedical field where the number of collaborations is growing rapidly, making it increasingly harder to support the best scientists. Therefore, here we introduce a simple harmonic weighing algorithm for correcting citations and citation-based metrics such as the h-index for co-authorships. This weighing algorithm can account for both the nvumber of co-authors and the sequence of authors on a paper. We then derive a measure called the 'profit (p)-index', which estimates the contribution of co-authors to the work of a given author. By using samples of researchers from a renowned Dutch University hospital, Spinoza Prize laureates (the most prestigious Dutch science award), and Nobel Prize laureates in Physiology or Medicine, we show that the contribution of co-authors to the work of a particular author is generally substantial (i.e., about 80%) and that researchers' relative rankings change materially when adjusted for the contributions of co-authors. Interestingly, although the top University hospital researchers had the highest h-indices, this appeared to be due to their significantly higher p-indices. Importantly, the ranking completely reversed when using the profit adjusted h-indices, with the Nobel laureates having the highest, the Spinoza Prize laureates having an intermediate, and the top University hospital researchers having the lowest profit adjusted h-indices, respectively, suggesting that exceptional researchers are characterized by a relatively high degree of scientific independency/originality. The concepts and methods introduced here may thus provide a more fair impression of a scientist's autonomous academic performance. <s> BIB011
|
If we look at the number of citations per year received by a paper over time, it shows a typical birth-death process. Initially there are few citations; then the number increases to a maximum; finally, citations die away as the content becomes obsolete. Note that the total number of citations can only increase over time, but the rate of increase can decline as obsolescence sets in. There are many variants of this basic pattern, for example "shooting stars" that are highly cited but die quickly, and "sleeping beauties" that are ahead of their time BIB007. There are also significantly different patterns of citation behaviour between disciplines, which will be discussed in the normalisation section.

There are several statistical models of this process. BIB001 uses a linear birth process; BIB004 assumes citations are exponential and deterministic. Perhaps the most usual approach is to conceptualise the process as essentially random from year to year, but with some underlying mean rate (λ), and to use the Poisson distribution. There are then two extensions: the move from a single paper to a collection of papers with differing mean rates, and the incorporation of obsolescence into the rate of citations BIB005 BIB003. If we assume a gamma distribution for the variability of the parameter λ, the result is a negative binomial distribution of the form $P(N = n) = \binom{\nu + n - 1}{n}\left(\frac{\alpha}{\alpha+1}\right)^{\nu}\left(\frac{1}{\alpha+1}\right)^{n}$, where ν and α are parameters to be determined empirically. The negative binomial is a highly skewed distribution which, as we have seen, is generally the case with bibliometric data. The issue of zero citations is of concern. On the one hand, the fact that a paper has never been cited does not imply that it is of zero quality, especially when it has been through rigorous reviewing processes in a top journal; this is evidence that citations are not synonymous with quality. On the other hand, it can be argued that a paper that has never been cited must at the least be disconnected from the field in question. The mean cites per paper (over 15 years) vary considerably between journals, from 7.2 to 38.6, showing major differences between journals (to be covered in a later section), although it is difficult to disentangle whether this is because of the intrinsically better quality of the papers or simply the reputation of the journal. BIB009 found that the journal can be considered a significant covariate in the prediction of citation impact.

Obsolescence can be incorporated into the model by including a time-based function in the distribution. This would generally be an S-shaped curve that alters the value of λ over time, but there are many possibilities BIB002, and the empirical results did not identify any particular one, although the gamma and Weibull distributions provided the best fits. It is also possible to predict statistically how many additional citations will be generated given the number received so far. The main results are that, at time t, the expected future citations are a linear function of the citations received so far, and that the slope of this increment line decreases over the lifetime of the papers. These results apply to collections of papers, but do not seem to hold for the dynamics of individual papers. In a further study of the same data set, the citation patterns of the individual papers were modelled. The main conclusions were twofold: i) individual papers were highly variable and it was almost impossible to predict the final number of citations based on the number in the early years, in fact up to about year ten.
This was partly because of sleeping beauty and shooting star effects. ii) The time period for papers to mature was quite long: the maximum citations were not reached until years eight and nine, and many papers were still being strongly cited at the end of 14 years. This is very different from the natural sciences, where the pace of citation is very much quicker for most papers BIB010.

If we wish to use citations as a basis for comparative evaluation, whether of researchers, journals or departments, we must consider influences on citations other than pure impact or quality. The first, and most obvious, is simply the number of papers generating a particular total of citations. A journal or department publishing 100 papers per year would expect more citations than one publishing 20. For this reason, the main comparative indicator traditionally used has been the mean cites per paper (CPP), or raw impact per paper (RIP), during the time period of study. This was the basis of the Leiden (CWTS) "crown indicator" measure for evaluating research units, suitably normalised against other factors. We should note that this is the opposite of total citations: it pays no attention at all to the number of papers, so a researcher with a CPP of 20 could have one paper, or one hundred papers, each with 20 citations. These other factors include: the general disciplinary area (natural science, social science or humanities); particular fields, such as biomedicine (high) or mathematics (low); the type of paper (reviews are highly cited); the degree of generality of the paper (i.e., whether it is of interest to a large or small audience); reputational effects such as the journal, the author, or the institution; the language; and the region or country (generally the US has the highest number of researchers and therefore citations), as well as the actual content and quality of the paper.

Another interesting issue is whether all citations should be worth the same. There are three distinct factors here: the number of authors of a paper, the number of references in the citing paper, and the quality of the citing journal. In terms of numbers of authors, the sciences generally have many collaborators within an experimental or laboratory setting, all of whom get credited. Compared with the situation of a single author who has done all the work themselves, should not the citations coming to such a paper be spread among the authors? The extreme example mentioned above, the single paper announcing the Higgs boson, actually had a significant effect on the position of several universities in the 2014 Times Higher World University Ranking. The paper, with 2,896 "authors" affiliated to 228 institutions, had received 1,631 citations within a year. All of the institutions received full credit for this and, for some that had only a relatively small number of papers, it made a huge difference BIB011 BIB006. The number of references in the citing paper can be used as a form of normalisation (fractional counting of citations) BIB008, which will be discussed below. Taking into account the quality of the citing journal gives rise to new indicators that will be discussed in the section on journals.
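The gamma-mixed Poisson model sketched above is easy to explore by simulation. The short Python sketch below uses invented parameter values (it is not a fitted model from any of the cited studies): each paper's latent citation rate λ is drawn from a gamma distribution and its observed count is Poisson with that rate, which marginally gives the negative binomial and reproduces the characteristic skew, including a sizeable share of never-cited papers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (for illustration only).
nu, alpha = 1.2, 0.15      # gamma shape and rate for the latent citation rate
n_papers = 10_000

lam = rng.gamma(shape=nu, scale=1.0 / alpha, size=n_papers)  # latent rate per paper
cites = rng.poisson(lam)                                     # observed citation counts

print("mean citations per paper:", round(cites.mean(), 2))
print("share never cited:", round((cites == 0).mean(), 3))
print("share of all citations held by the top 10% of papers:",
      round(np.sort(cites)[-n_papers // 10:].sum() / cites.sum(), 3))
```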
|
A Review of Theory and Practice in Scientometrics <s> The h-index <s> I propose the index $h$, defined as the number of papers with citation number higher or equal to $h$, as a useful index to characterize the scientific output of a researcher. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Hirsch (2005) has proposed the h-index as a single-number criterion to evaluate the scientific output of a researcher (Ball, 2005): A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have fewer than h citations each. In a study on committee peer review (Bornmann & Daniel, 2005) we found that on average the h-index for successful applicants for post-doctoral research fellowships was consistently higher than for non-successful applicants. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The author analyses the basic properties of the h-index, an indicator developed by J. E. Hirsch, on the basis of a probability distribution model widely used in bibliometrics, namely the Pareto distributions. The h-index, based on the number of citations received, measures publication activity and citation impact. It is a useful indicator with interesting mathematical properties, but it cannot substitute for the more sophisticated bibliometric indicators in current use. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The calculation of Hirsch's h-index ignores many details; therefore, a single h-index cannot reflect the different time spans over which scientists accumulate their papers and citations. In this study the h-index sequence and the h-index matrix are constructed, which supply the details missing from a single h-index, reveal the different manners and mechanisms by which the h-index increases, and make scientists at different scientific ages comparable. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Hirsch's h-index gives a single number that in some sense summarizes an author's research output and its impact. Since an individual author's h-index will be time-dependent, we propose instead the h-rate which, according to theory, is (almost) constant. We re-analyse a previously published data set (Liang, 2006) which, although not of the precise form to properly test our model, reveals that in many cases we do not have a constant h-rate. On the other hand this then suggests ways in which deeper scientometric investigations could be carried out. This work should be viewed as complementary to that of Liang (2006). <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The number h of papers with at least h citations has been proposed to evaluate individuals' scientific research production. This index is robust in several ways but is strongly dependent on the research field. We propose a complementary index $h_I = h^2 / N_a^{(T)}$, with $N_a^{(T)}$ being the total number of authors in the considered h papers. A researcher with index $h_I$ has $h_I$ papers with at least $h_I$ citations if he/she had published alone. We have obtained the rank plots of h and $h_I$ for four Brazilian scientific communities. In contrast with the h-index, the $h_I$ index rank plots collapse into a single curve, allowing comparison among different research areas.
<s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Are some ways of measuring scientific quality better than others? Sune Lehmann, Andrew D. Jackson and Benny E. Lautrup analyse the reliability of commonly used methods for comparing citation records. Citation analysis can loom large in a scientist's career. In this issue Sune Lehmann, Andrew Jackson and Benny Lautrup compare commonly used measures of author quality. The mean number of citations per paper emerges as a better indicator than the more complex Hirsch index; a third method, the number of papers published per year, measures industry rather than ability. Careful citation analyses are useful, but Lehmann et al. caution that institutions often place too much faith in decisions reached by algorithm, use poor methodology or rely on inferior data sets. <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Using both author-level and journal-level data, Hirsch's h-index is shown to possess substantial heuristic value in that it yields accurate results whilst requiring minimal informational acquisition effort. As expected, the h-index of productive consumer scholars correlated strongly with their total citation counts. Furthermore, the h-indices as obtained via ISI/Thompson and GoogleScholar were highly correlated albeit the latter yielded higher values. Finally, using a database of business-relevant journals, a significant correlation was found between the journals' h-indices and their citation impact scores. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Bibliometric measures of individual scientific achievement are of particular interest if they can be used to predict future achievement. Here we report results of an empirical study of the predictive power of the h index compared with other indicators. Our findings indicate that the h index is better than other indicators considered (total citation count, citations per paper, and total paper count) in predicting future scientific achievement. We discuss reasons for the superiority of the h index. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The recently developed h-index has been applied to the literature produced by senior British-based academics in librarianship and information science. The majority of those evaluated currently hold senior positions in UK information science and librarianship departments; however, a small number of staff in other departments and retired "founding fathers" were analyzed as well. 
The analysis was carried out using the Web of Science (Thomson Scientific, Philadelphia, PA) for the years from 1992 to October 2005, and included both second-authored papers and self-citations. The top-ranking British information scientist, Peter Willett, has an h-index of 31. However, it was found that Eugene Garfield, the founder of modern citation studies, has an even higher h-index of 36. These results support other studies suggesting that the h-index is a useful tool in the armory of bibliometrics. <s> BIB011 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> There is an increasing emphasis on the use of metrics for assessing the research contribution of academics, departments, journals or conferences. Contribution has two dimensions: quantity which can be measured by number/size of the outputs, and quality which is most easily measured by the number of citations. Recently, Hirsch proposed a new metric which is simple, combines both quality and quantity in one number, and is robust to measurement problems. This paper applies the Hirsch-index (h-index) to three groups of management academics—BAM Fellows, INFORMS Fellows and members of COPIOR—in order to evaluate the extent to which the h-index would serve as a reliable measure of the contribution of researchers in the management field. <s> BIB012 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The h-index and some related bibliometric indices have received a lot of attention from the scientific community in the last few years due to some of their good properties (easiness of computation, balance between quantity of publications and their impact and so on). Many different indicators have been developed in order to extend and overcome the drawbacks of the original Hirsch proposal. In this contribution we present a comprehensive review on the h-index and related indicators field. From the initial h-index proposal we study their main advantages, drawbacks and the main applications that we can find in the literature. A description of many of the h-related indices that have been developed along with their main characteristics and some of the works that analyze and compare them are presented. We also review the most up to date standardization studies that allow a fair comparison by means of the h-index among scientists from different research areas and finally, some works that analyze the computation of the h-index and related indices by using different citation databases (ISI Citation Indexes, Google Scholar and Scopus) are introduced. <s> BIB013 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> BACKGROUND ::: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that in addition to h(2) citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: To solve these problems, I here propose the e-index, where e(2) represents the ignored excess citations, in addition to the h(2) citations for h-core papers. 
Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index. <s> BIB014 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> This study is part of a program aimed at creating measures enabling a fairer and more complete assessment of a scholar’s contribution to a field, thus bringing greater rationality and transparency to the promotion and tenure process. It finds current approaches toward the evaluation of research productivity to be simplistic, atheoretic, and biased toward reinforcing existing reputation and power structures. This study examines the use of the Hirsch family of indices, a robust and theoretically informed metric, as an addition to prior approaches to assessing the scholarly influence of IS researchers. It finds that while the top tier journals are important indications of a scholar’s impact, they are neither the only nor, indeed, the most important sources of scholarly influence. Other ranking studies, by narrowly bounding the venues included in those studies, distort the discourse and effectively privilege certain venues by declaring them to be more highly influential than warranted. The study identifies three different categories of scholars: those who publish primarily in North American journals, those who publish primarily in European journals, and a transnational set of authors who publish in both geographies. Excluding the transnational scholars, for the scholars who published in these journal sets during the period of this analysis, we find that North American scholars tend to be more influential than European scholars, on average. We attribute this difference to a difference in the publication culture of the different geographies. This study also suggests that the influence of authors who publish in the European journal set is concentrated at a moderate level of influence, while the influence of those who publish in the North American journal set is dispersed between those with high influence and those with relatively low influence. Therefore, to be a part of the top European scholar list requires a higher level of influence than to be a part of the top North American scholar list. <s> BIB015 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> The h-index is a relatively recent bibliometric indicator for assessing the research output of scientists, based on the publications and the corresponding citations. Due to the original characteristics of easy calculation and immediate intuitive meaning, this indicator has become very popular in the scientific community. Also, it received some criticism essentially because of its "low" accuracy. The contribution of this paper is to provide a detailed analysis of the h-index, from the point of view of the indicator operational properties. This work can be helpful to better understand the peculiarities and limits of h and avoid its misuse. 
Finally, we suggest an additional indicator (f) that complements h with the information related to the publication age, not compromising the original simplicity and immediacy of understanding. <s> BIB016 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index which is the geometric mean of hg. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, ranking implied by Thomson Reuters’ Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference. <s> BIB017 </s> A Review of Theory and Practice in Scientometrics <s> The h-index <s> Hirsch's h-index cannot be used to compare academics that work in different disciplines or are at different career stages. Therefore, a metric that corrects for these differences would provide information that the h-index and its many current refinements cannot deliver. This article introduces such a metric, namely the hI,annual (or hIa for short). The hIa-index represents the average annual increase in the individual h-index. Using a sample of 146 academics working in five major disciplines and representing a wide variety of career lengths, we demonstrate that this metric attenuates h-index differences attributable to disciplinary background and career length. It is also easy to calculate with readily available data from all major bibliometric databases, such as Thomson Reuters Web of Knowledge, Scopus and Google Scholar. Finally, as the metric represents the average number of single-author-equivalent "impactful" articles that an academic has published per year, it also allows an intuitive interpretation. Although just like any other metric, the hIa-index should never be used as the sole criterion to evaluate academics, we argue that it provides a more reliable comparison between academics than currently available metrics. <s> BIB018
|
We have seen that the total number of citations, as a metric, is strongly affected by the number of papers but provides no information about it. At the opposite extreme, the CPP is totally insensitive to productivity. In 2005, a new metric was proposed by BIB001 that combined, in a single easy-to-understand number, both impact (citations) and productivity (papers). The h-index has been hugely influential since then, generating an entire literature of its own; currently the original paper has well over 4000 citations in GS. In this section we will only be able to summarise the main advantages and disadvantages; for more detailed reviews see BIB013 BIB002 BIB009 BIB003, and for mathematical properties see BIB003 and BIB016. The h-index is defined as: "a scientist has index h if h of his or her $N_p$ papers have at least h citations each and the other ($N_p$ − h) papers have <= h citations each" (p. 16569). So h represents the top h papers, all of which have at least h citations; this one number thus combines both number of citations and number of papers. These h papers are generally called the h-core. The h-core is not uniquely defined when more than one paper has exactly h citations. The h-index ignores all the other papers below h, and it also ignores the actual number of citations received above h. The advantages are that: it combines both productivity and impact in a single measure that is easily understood and very intuitive; it is easily calculated knowing just the number of citations, whether from WoS, Scopus or Google Scholar (indeed, all three now routinely calculate it); it can be applied at different levels (researcher, journal or department); it is objective and a good comparator within a discipline where citation rates are similar; and it is robust to poor data, since it ignores the lower-ranked papers where the problems usually occur, which is particularly important if using GS. However, many limitations have been identified, including some that affect all citation-based measures (e.g., the problem of different scientific areas, and ensuring correctness of data), and a range of modifications have been suggested. The first limitation is that the metric is insensitive to the actual numbers of citations received by the papers in the h-core. Thus two researchers (or journals) with the same h-index could have dramatically different actual numbers of citations. The g-index has been suggested as a way of compensating for this: "A set of papers has a g-index of g if g is the highest rank such that the top g papers have, together, at least g² citations" (p. 132). The fundamental idea is that the h-core papers must have at least h² citations between them, although in practice they may have many more. At first sight, the use of the square rather than the cube or any other power seems arbitrary, but it is a nice choice since the definition can be re-written so that "the top g papers have an average number of citations at least g", which is much more intuitively appealing. g is at least as large as h. The more citations the top papers have, the larger g will become, and so it will to some extent reflect the total number of citations. The disadvantage of this metric is that it is less intuitively obvious than the h-index. Another alternative is the e-index proposed by BIB014. There are several other proposals that measure statistics of the papers in the h-core, for example:
o The a-index (Rousseau, 2006), which is the mean number of citations of the papers in the h-core.
o The m-index, which is the median number of citations of the papers in the h-core, since the data are always highly skewed. Currently, Google Scholar Metrics implements a 5-year h-index and a 5-year m-index.
o The r-index, which is the square root of the sum of the citations of the h-core papers. This addresses the fact that the a-index actually penalises better researchers, because the number of citations is divided by h, which will be bigger for better scientists. A further development is the ar-index, a variant of the r-index that also takes into account the age of the papers.
The h-index never decreases and is strongly related to the length of time the publications have existed. This biases it against young researchers, and it can continue to increase even after a researcher has retired. Data on this are available from BIB004, who investigated the actual sequence of h values over time for the top scientists included in Hirsch's sample. A proposed way around this is to consider the h-rate BIB005, that is, the h-index at time t divided by the number of years since the researcher's first publication. This was also proposed by Hirsch as the m parameter in his original paper; values of 2 or 3 indicate scientists who are both highly productive and well cited. The h-index also does not discriminate well, since it only takes integer values: given that most researchers may well have h-indices between 10 and 30, many will share the same value. Guns and Rousseau (2009) have investigated real and rational variants of both g and h. As with all citation-based indicators, these need to be normalised in some way to the citation rates of the field. Iglesias and Pecharroman (2007) collected, from WoS, the mean citations per paper in each year from 1995-2005 for 21 different scientific fields. The totals ranged from under 2.5 for computer science and mathematics to over 24 for molecular biology. From these data they constructed a table of normalisation factors to be applied to the h-index depending on the field and also on the total number of papers published by the researcher. A similar issue concerns the number of authors. The sciences tend to have more authors per paper than the social sciences and humanities, and this generates more papers and more citations. BIB006 developed the hI-index as the h-index divided by the mean number of authors of the h-core papers; they also claim that this accounts to some extent for the citation differences between disciplines. Publish or Perish also corrects for authors, by dividing the citations for each paper by the number of authors before calculating the hI,norm-index. This metric has been further normalised to take into account the career length of the author BIB018. The h-index is dependent on, or limited by, the total number of publications, and this is a disadvantage for researchers who are highly cited but for a small number of publications BIB009. For example, Aguillo has compiled a list of the most highly cited researchers in GS according to the h-index (382 with h's of 100 or more). A notable absentee is Thomas Kuhn, one of the most influential researchers of the last 50 years with his concept of a scientific paradigm: his book is extremely highly cited, but his small number of other publications limits his h-index. On the question of predictive power, BIB010 found that the h-index was better than other indicators (total citations, citations per paper, total papers) at predicting future scientific achievement. This was in contrast to other studies such as BIB007. Generally, such comparisons show that the h-index is highly correlated with other bibliometric indicators, but more so with measures of productivity, such as the number of papers and the total number of citations, than with citations per paper, which is more a measure of pure impact BIB013 BIB009.
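Because these definitions are purely arithmetic, they are easy to state in code. The following Python sketch computes the h-index, the g-index and the h-rate exactly as defined in the text, using an invented citation record; it is illustrative only and ignores the refinements (field normalisation, author counts, career length) discussed above.

```python
def h_index(cites):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(cites, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(cites):
    """Largest g such that the top g papers have at least g**2 citations in total."""
    ranked = sorted(cites, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def h_rate(cites, career_years):
    """Hirsch's m parameter: the h-index divided by years since first publication."""
    return h_index(cites) / career_years

# Invented citation record for illustration.
record = [45, 32, 20, 15, 12, 9, 7, 4, 2, 1, 0, 0]
print(h_index(record), g_index(record), round(h_rate(record, 10), 2))  # 7, 12, 0.7
```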
There have been several studies of the use of the h-index in business and management fields, such as information systems BIB011 BIB015, management science BIB012, consumer research BIB008, marketing BIB017 and business. Overall, the h-index may be somewhat crude in compressing information about a researcher into a single number, and it should always be used for evaluation purposes in combination with other measures or with peer judgement; nevertheless, it has clearly become well established in practice, being available in all the main citation databases. Another approach is the use of percentile measures, which we will cover in the next section.
|
A Review of Theory and Practice in Scientometrics <s> Normalisation Methods <s> The number of citations is becoming an increasingly popular index for measuring the impact of a scholar's research or the quality of an academic department. One obvious question is: what are the factors that influence the number of citations that a paper receives? This study investigates the number of citations received by papers published in six well-known management science journals. It considers factors that relate to the author(s), the article itself, and the journal. The results show that the strongest factor is the journal itself; but other factors are also significant including the length of the paper, the number of references, the status of the first author's institution, and the type of paper, especially if it is a review. Overall, this study provides some insights into the determinants of a paper's impact that may be helpful for particular stakeholders to make important decisions. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Normalisation Methods <s> A discipline such as business and management (BM second, for the purpose of normalising citation data as it is well-known that citation rates vary significantly between different disciplines. And third, because journal rankings and lists tend to split their classifications into different subjects—for example, the Association of Business Schools list, which is a standard in the UK, has 22 different fields. Unfortunately, at the moment these are created in an ad-hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously based on actual citation patterns. We have examined 450 journals in B&M, which are included in the ISI Web of Science and analysed the cross-citation rates between them enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Normalisation Methods <s> We address the question how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators. <s> BIB003
|
In considering the factors that affect the number of citations that papers receive, there are many to do with the individual paper (content, type of paper, quality, author, or institution BIB001), but underlying those there are clear disciplinary differences that are hugely significant. As mentioned above, Iglesias and Pecharroman (2007) found that mean citation rates in molecular biology were ten times those in computer science. The problem is not just between disciplines but also within disciplines such as business and management, which encompasses very different types of research field. BIB002 found that management and strategy papers averaged nearly four times as many citations as public administration papers. This means that comparisons between researchers, journals or institutions across fields will not be meaningful without some form of normalisation. It is also important to normalise for the time period, because the number of citations always increases over time BIB003.
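The basic idea of field and year normalisation can be shown with a small sketch. The baseline values below are hypothetical world-average citation rates keyed by field and publication year (in practice they would be derived from a database such as WoS or Scopus); each paper's citation count is divided by the baseline for its field and year.

```python
# Hypothetical world-average citations per paper, keyed by (field, publication year).
baseline = {
    ("molecular biology", 2015): 24.0,
    ("computer science", 2015): 2.4,
}

papers = [  # (field, year, citations received); invented values
    ("molecular biology", 2015, 30),
    ("computer science", 2015, 6),
]

for field, year, cites in papers:
    expected = baseline[(field, year)]
    print(f"{field}: {cites} citations, field-normalised score = {cites / expected:.2f}")

# The computer science paper (6 citations, score 2.50) outperforms its field by more than
# the molecular biology paper (30 citations, score 1.25), despite far fewer raw citations.
```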
|
A Review of Theory and Practice in Scientometrics <s> Field Classification Normalisation <s> Researchers worldwide are increasingly being assessed by the citation rates of their papers. These rates have potential impact on academic promotions and funding decisions. Currently there are several different ways that citation rates are being calculated, with the state of the art indicator being the crown indicator. This indicator has flaws and improvements could be considered. An item oriented field normalized citation score average (c¯f) is an incremental improvement as it differs from the crown indicator in so much as normalization takes place on the level of individual publication (or item) rather than on aggregated levels, and therefore assigns equal weight to each publication. The normalization on item level also makes it possible to calculate the second suggested indicator: total field normalized citation score (Σcf). A more radical improvement (or complement) is suggested in the item oriented field normalized logarithm-based citation z-score average (c¯fz[ln] or citation z-score). This indicator assigns equal weight to each included publication and takes the citation rate variability of different fields into account as well as the skewed distribution of citations over publications. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Field Classification Normalisation <s> Since the last two decennia, Wageningen UR Library has been involved in bibliometric analyses for the evaluation of scientific output of staff, chair groups and research institutes of Wageningen UR. In these advanced bibliometric analyses several indicator scores, such as the number of publications, number of citations and citation impacts, are calculated. For a fair comparison of scientific output from staff, chair groups or research institutes (that each work in a different scientific discipline with specific publication and citation habits) scores of the measured bibliometric indicators are normalized against average trend (or baseline) scores per research field. For the collection of scientific output that is subjected to the bibliometric analyses the repository Wageningen Yield (WaY) is used. This repository is filled from the research registration system Metis in which meta data for scientific output is registered by the secretaries of the research groups of Wageningen UR. By the application of a connection between the meta data of publications in WaY and citation scores in Thomson Reuters? Web of Science, custom-made analyses on the scientific output and citation impact of specific entities from Wageningen UR can be performed fast and efficiently. Moreover, a timely registration of new scientific output is stimulated (to ensure their inclusion in future bibliometric analyses) and the quality of meta data in WaY is checked by the library staff and by the research staff from the research entities under investigation, thus promoting communication between the library and customers <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Field Classification Normalisation <s> The article "Caveats for the journal and field normalizations in the CWTS (`Leiden') evaluations of research performance", published by Tobias Opthof and Loet Leydesdorff (arXiv:1002.2769) deals with a subject as important as the application of so called field normalized indicators of citation impact in the assessment of research performance of individual researchers and research groups. 
Field normalization aims to account for differences in citation practices across scientific-scholarly subject fields. As the primary author of the papers presenting the "Leiden" indicators and of many reports and articles reporting on the outcomes of assessments actually using these measures, I comment on the 3 main issues addressed in the paper by Opthof and Leydesdorff. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Field Classification Normalisation <s> The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out. © 2012 Wiley Periodicals, Inc. <s> BIB004
|
The most well-established methodology for evaluating research centres was developed by the Centre for Science and Technology Studies (CWTS) at Leiden University and is known as the crown indicator or Leiden Ranking Methodology (LRM). Essentially, this method compares the number of citations received by the publications of a research unit over a particular time period with the number that would be expected, on a world-wide basis, for the appropriate field and publication date. In this way, it normalises the citation rates of the department against the rates for its whole field. Typically, top departments may have citation rates that are three or four times the field average. Leiden also produces a ranking of world universities based on bibliometric methods that will be discussed elsewhere BIB004. This is the traditional "crown indicator", but this approach to normalisation has been criticised BIB001 and an alternative has been used in several cases BIB002. This has generated considerable debate in the literature BIB003. The main criticism concerns the order of calculation in the indicator, although the use of a mean when citation distributions are highly skewed is also a concern. It is argued that, mathematically, it is wrong to sum the actual and expected numbers of citations separately and then divide them; rather, the division should be performed first, for each paper, and these ratios should then be averaged. In the latter case one obtains a proper statistic rather than merely a quotient. It might be thought that this is purely a technical issue, but it can affect the results significantly. In particular, the older CWTS method tends to weight more highly publications from fields with high citation numbers, whereas the new one weights all publications equally. Also, the older method is not consistent in its ranking of institutions when institutions improve equally in terms of publications and citations. Eventually this was accepted by CWTS, and researchers from CWTS have produced both theoretical and empirical comparisons of the two methods, concluding that the newer one is theoretically preferable but does not make much difference in practice. The new method is called the "mean normalised citation score" (MNCS). It has also been pointed out that the "alternative" method was not really an alternative but in fact the correct way to normalise, and had already been in use elsewhere for fifteen years.
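To make the difference concrete: writing $c_i$ for the citations of paper $i$ and $e_i$ for the expected (world-average) citations for its field and year, the older crown indicator computes $\sum_i c_i / \sum_i e_i$, whereas the MNCS computes $\frac{1}{n}\sum_i c_i / e_i$. The sketch below, using invented numbers rather than CWTS data, shows that the two can give noticeably different results for the same set of papers.

```python
# Each tuple is (citations received, expected world-average citations for the
# paper's field and year).  All values are invented for illustration.
papers = [(10, 2.0), (3, 6.0), (0, 1.5), (8, 4.0)]

cites = [c for c, _ in papers]
expected = [e for _, e in papers]

# Older "crown indicator" (CPP/FCSm): sum citations and expectations first, then divide.
crown = sum(cites) / sum(expected)

# Mean normalised citation score (MNCS): divide per paper first, then average the ratios.
mncs = sum(c / e for c, e in papers) / len(papers)

print(f"crown indicator = {crown:.2f}, MNCS = {mncs:.2f}")
# crown = 21 / 13.5 ≈ 1.56, MNCS = (5.0 + 0.5 + 0.0 + 2.0) / 4 ≈ 1.88: the order of
# operations matters, and the ratio-of-sums form implicitly gives more weight to
# papers from fields with high expected citation rates.
```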
|
A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> Detecting homogeneous areas in research networks is a very common feature of bibliometric analysis, either for academic or policy purposes. The method presented here combines structural analysis and trend detection, by operating on a “thick-slice” of time, starting from co-citation or co-word analysis (applications of either type have already been carried on). Significance of “trend” of clusters is partially addressed, through an analysis of publication delays. Examples are given on a co-citation analysis in the field of astrophysics (1986–1989). <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> A new family of citation normalization methods appeared recently, in addition to the classical methods of "cited-side" normalization and the iterative measures of intellectual influence in the wake of Pinski and Narin influence weights. These methods have a quite global scope in citation analysis but were first applied to the journal impact, in the experimental Audience Factor (AF) and the Scopus Source-Normalized Impact per Paper (SNIP). Analyzing some properties of the Garfield's Journal Impact Factor, this note highlights the rationale of citing-side (or source-level, fractional citation, ex ante) normalization. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> Two paradigmatic approaches to the normalisation of citation-impact measures are discussed. 
The results of the mathematical manipulation of standard indicators such as citation means, notably journal Impact Factors, (called a posteriori normalisation) are compared with citation measures obtained from fractional citation counting (called a priori normalisation). The distributions of two subfields of the life sciences and mathematics are chosen for the analysis. It is shown that both methods provide indicators that are useful tools for the comparative assessment of journal citation impact. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics—Why should one use the mean and not the median?—and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions can not be used as a reliable instrument for the classification. © 2011 Wiley Periodicals, Inc. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> Citation numbers are extensively used for assessing the quality of scientific research. The use of raw citation counts is generally misleading, especially when applied to cross-disciplinary comparisons, since the average number of citations received is strongly dependent on the scientific discipline of reference of the paper. Measuring and eliminating biases in citation patterns is crucial for a fair use of citation numbers. Several numerical indicators have been introduced with this aim, but so far a specific statistical test for estimating the fairness of these numerical indicators has not been developed. Here we present a statistical method aimed at estimating the effectiveness of numerical indicators in the suppression of citation biases. The method is simple to implement and can be easily generalized to various scenarios. As a practical example we test, in a controlled case, the fairness of fractional citation count, which has been recently proposed as a tool for cross-discipline comparison. We show that this indicator is not able to remove biases in citation patterns and performs much worse than the rescaling of citation counts with average values. 
<s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> A discipline such as business and management (BM second, for the purpose of normalising citation data as it is well-known that citation rates vary significantly between different disciplines. And third, because journal rankings and lists tend to split their classifications into different subjects—for example, the Association of Business Schools list, which is a standard in the UK, has 22 different fields. Unfortunately, at the moment these are created in an ad-hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously based on actual citation patterns. We have examined 450 journals in B&M, which are included in the ISI Web of Science and analysed the cross-citation rates between them enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed. <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> The SNIP (source normalized impact per paper) indicator is an indicator of the citation impact of scientific journals. The indicator, introduced by Henk Moed in 2010, is included in Elsevier's Scopus database. The SNIP indicator uses a source normalized approach to correct for differences in citation practices between scientific fields. The strength of this approach is that it does not require a field classification system in which the boundaries of fields are explicitly defined. In this paper, a number of modifications that will be made to the SNIP indicator are explained, and the advantages of the resulting revised SNIP indicator are pointed out. It is argued that the original SNIP indicator has some counterintuitive properties, and it is shown mathematically that the revised SNIP indicator does not have these properties. Empirically, the differences between the original SNIP indicator and the revised one turn out to be relatively small, although some systematic differences can be observed. Relations with other source normalized indicators proposed in the literature are discussed as well. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> Different scientific fields have different citation practices. Citation-based bibliometric indicators need to normalize for such differences between fields in order to allow for meaningful between-field comparisons of citation impact. Traditionally, normalization for field differences has usually been done based on a field classification system. In this approach, each publication belongs to one or more fields and the citation impact of a publication is calculated relative to the other publications in the same field. Recently, the idea of source normalization was introduced, which offers an alternative approach to normalize for field differences. In this approach, normalization is done by looking at the referencing behavior of citing publications or citing journals. In this paper, we provide an overview of a number of source normalization approaches and we empirically compare these approaches with a traditional normalization approach based on a field classification system. We also pay attention to the issue of the selection of the journals to be included in a normalization for field differences. 
Our analysis indicates a number of problems of the traditional classification-system-based normalization approach, suggesting that source normalization approaches may yield more accurate results. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-Rom version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano (2012a), rescaling outperforms fractional counting of citations for reasons that we consider. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> Source Normalisation <s> We address the question how citation-based bibliometric indicators can best be normalized to ensure fair comparisons between publications from different scientific fields and different years. In a systematic large-scale empirical analysis, we compare a traditional normalization approach based on a field classification system with three source normalization approaches. We pay special attention to the selection of the publications included in the analysis. Publications in national scientific journals, popular scientific magazines, and trade magazines are not included. Unlike earlier studies, we use algorithmically constructed classification systems to evaluate the different normalization approaches. Our analysis shows that a source normalization approach based on the recently introduced idea of fractional citation counting does not perform well. Two other source normalization approaches generally outperform the classification-system-based normalization approach that we study. Our analysis therefore offers considerable support for the use of source-normalized bibliometric indicators. <s> BIB011
|
The normalisation method just discussed normalised citations against other citations, but an alternative approach has been suggested, initially in the form of the "audience factor", which considers the sources of citations, that is, the reference lists of citing papers, rather than the citations themselves. This general approach is gaining popularity and is also known as the "citing-side" approach BIB003 , source normalisation (Moed, 2010c) (SNIP), fractional counting of citations, and a priori normalisation BIB004 . The essential difference in this approach is that the reference set of journals is not defined in advance, according to WoS or Scopus categories, but rather is defined at the time, specifically for the collection of papers being evaluated (whether that is the papers of a journal, a department, or an individual). It consists of all the papers, in the given time window, that cite papers in the target set. Each collection of papers will, therefore, have its own unique reference set, and it is the lists of references from those papers that are used for normalisation. This approach has obvious advantages: it avoids the use of WoS categories, which are ad hoc and outdated BIB007 , and it allows for journals that are interdisciplinary and would therefore be referenced by journals from a range of fields. Having determined the reference set of papers, the methods then differ in how they employ the number of references in calculating a metric. The audience factor BIB003 works at the level of a citing journal. It calculates a weight for citations from that journal based on the ratio of the average number of active references 15 in all journals to the average number of references in the citing journal. This ratio will be larger for journals that have few references compared to the average, because they are in less dense citation fields. Citations to the target (cited) papers are then weighted using the calculated weights, which should equalise for the citation density of the citing journals. Fractional counting of citations BIB005 BIB010 BIB001 begins at the level of an individual citation and the paper which produced it. Instead of counting each citation as one, it counts it as a fraction of the number of references in the citing paper. Thus, if a citation comes from a paper with m references, the citation will have a value of 1/m. It is then legitimate to add all these fractionated citations to give the total citation value for the cited paper. An advantage of this approach is that statistical significance tests can be performed on the results. One issue is whether all references should be included (which Leydesdorff et al. do) or whether only the active references should be counted. The third method is essentially that which underlies the SNIP indicator for journals BIB002 , which will be discussed in Section 5. In contrast to fractional counting, it forms a ratio of the mean number of citations to the journal to the mean number of references in the citing journals. A later version of SNIP BIB008 used the harmonic mean to calculate the average number of references, and in this form it is essentially the same as fractional counting except for an additional factor to take account of papers with no active references. Some empirical reviews of these approaches have been carried out.
These studies BIB009 BIB011 compared the three source-normalising methods with the new CWTS crown indicator (MNCS) and concluded that the source normalisation methods were preferable to the field classification approach, and that, of them, the audience factor and revised SNIP were best. This was especially noticeable for interdisciplinary journals. The fractional counting method did not fully eliminate disciplinary differences BIB006 and also did not account for citation age.
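As a concrete illustration of the citing-side idea, the sketch below (purely illustrative; the citing papers and their reference counts are invented) fractionally counts the citations to a single target paper, weighting each citation by the reciprocal of the citing paper's number of references.

```python
# Minimal sketch of fractional citation counting (invented data).
# Each citation is weighted by 1/m, where m is the number of references in the
# citing paper, so citations originating in reference-dense fields count for less.

citing_papers = [
    {"id": "A", "n_references": 50},  # e.g. from a field with long reference lists
    {"id": "B", "n_references": 10},
    {"id": "C", "n_references": 25},
]

# Whole counting: every citation counts as 1.
whole_count = len(citing_papers)

# Fractional counting: each citation contributes 1/m.
fractional_count = sum(1 / p["n_references"] for p in citing_papers)

print(f"whole count = {whole_count}, fractional count = {fractional_count:.2f}")
# Three citations reduce to 0.02 + 0.10 + 0.04 = 0.16 fractional citations.
```

The audience factor and SNIP follow the same citing-side logic but work with averages of reference counts at the journal level rather than weighting each citation individually.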
|
A Review of Theory and Practice in Scientometrics <s> Percentile-Based Approaches <s> In bibliometrics, the association of “impact” with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories (“Information Science & Library Science” and “Multidisciplinary Sciences”). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified. © 2011 Wiley Periodicals, Inc. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Percentile-Based Approaches <s> Percentiles have been established in bibliometrics as an important alternative to mean-based indicators for obtaining a normalized citation impact of publications. Percentiles have a number of advantages over standard bibliometric indicators used frequently: for example, their calculation is not based on the arithmetic mean which should not be used for skewed bibliometric data. This study describes the opportunities and limits and the advantages and disadvantages of using percentiles in bibliometrics. We also address problems in the calculation of percentiles and percentile rank classes for which there is not (yet) a satisfactory solution. It will be hard to compare the results of different percentile-based studies with each other unless it is clear that the studies were done with the same choices for percentile calculation and rank assignment. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Percentile-Based Approaches <s> Journal Impact Factors (IFs) can be considered historically as the first attempt to normalize citation distributions by using averages over two years. However, it has been recognized that citation distributions vary among fields of science and that one needs to normalize for this. Furthermore, the mean-or any central-tendency statistics-is not a good representation of the citation distribution because these distributions are skewed. Important steps have been taken to solve these two problems during the last few years. First, one can normalize at the article level using the citing audience as the reference set. Second, one can use non-parametric statistics for testing the significance of differences among ratings. 
A proportion of most-highly cited papers (the top-10% or top-quartile) on the basis of fractional counting of the citations may provide an alternative to the current IF. This indicator is intuitively simple, allows for statistical testing, and accords with the state of the art. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Percentile-Based Approaches <s> For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles are based on an ordered set of citation counts in a reference set, whereby the fraction of papers at or below the citation counts of a focal paper is used as an indicator for its relative citation impact in the set. In this study, we pursue two related objectives: (1) although different percentile-based approaches have been developed, an approach is hitherto missing that satisfies a number of criteria such as scaling of the percentile ranks from zero (all other papers perform better) to 100 (all other papers perform worse), and solving the problem with tied citation ranks unambiguously. We introduce a new citation-rank approach having these properties, namely P100; (2) we compare the reliability of P100 empirically with other percentile-based approaches, such as the approaches developed by the SCImago group, the Centre for Science and Technology Studies (CWTS), and Thomson Reuters (InCites), using all papers published in 1980 in Thomson Reuters Web of Science (WoS). How accurately can the different approaches predict the long-term citation impact in 2010 (in year 31) using citation impact measured in previous time windows (years 1–30)? The comparison of the approaches shows that the method used by InCites overestimates citation impact (because of using the highest percentile rank when papers are assigned to more than a single subject category) whereas the SCImago indicator shows higher power in predicting the long-term citation impact on the basis of citation rates in early years. Since the results show a disadvantage in this predictive ability for P100 against the other approaches, there is still room for further improvements. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Percentile-Based Approaches <s> A similarity-oriented approach for deriving reference values used in citation normalization is explored and contrasted with the dominant approach of utilizing database-defined journal sets as a bas ... <s> BIB005
|
We have already mentioned that there is a general statistical problem with metrics based on the mean number of citations, which is that citation distributions are always highly skewed and this invalidates the mean as a measure of central tendency; the median is better. There is also the problem of ratios of means discussed above. A non-parametric alternative based on percentiles (an extension of the median) has been suggested for research groups, individual scientists and journals BIB001 . This is also used by the US National Science Board in their Science and Engineering Indicators. The method works as follows: 1. For each paper to be evaluated, a reference set of papers published in the same year, of the same type and belonging to the same WoS category is determined. 2. These are rank ordered and split into percentile rank (PR) classes, for example the top 1% (99th percentile), 5%, 10%, 25%, 50% and below 50%. For each PR class, the minimum number of citations necessary to get into the class is noted (there are several technical problems to be dealt with in operationalising these classes BIB002 BIB004 ). Based on its citations, the paper is then assigned to one of the classes. This particular classification is known as 6PR. 3. The procedure is repeated for all the target papers and the results are then summated, giving the overall percentage of papers in each of the PR classes. The resulting distributions can be statistically tested, using Dunn's test or the Mann-Whitney U test, against both the field reference values and against other competitor journals or departments. The particular categories used above are only one possible set BIB002 ; alternatives include the classes used in the US Science and Engineering Indicators and the full 100 percentiles (100PR) BIB004 . This approach provides a lot of information about the proportions of papers at different levels, but it is still useful to be able to summarise performance in a single value. The suggested method is to calculate a mean of the ranks weighted by the proportion of papers in each class. The minimum is 1, if all papers are in the lowest rank; the maximum is 6, if they are all in the top percentile. The field average will be 1.91 ((.01, .04, .05, .15, .25, .50) x (6, 5, 4, 3, 2, 1)), so a value above that is better than the field average. A variation of this metric, called I3, has been developed as an alternative to the journal impact factor (JIF) BIB003 BIB001 . Instead of multiplying the percentile ranks by the proportion of papers in each class, they are multiplied by the actual numbers of papers in each class, thus giving a measure that combines productivity with citation impact. In the original, the 100PR classification was used but other ones are equally possible. The main drawback of this method is that it relies on the field definitions in WoS or another database, which are unreliable, especially for interdisciplinary journals. It might be possible to combine it with some form of source normalisation BIB005 .
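The sketch below (all citation counts and class thresholds invented) illustrates how papers might be assigned to 6PR classes and how the single weighted score is then formed; in practice the thresholds would come from the world-wide reference set for the same field, year and document type.

```python
# Minimal sketch of the 6PR percentile-rank approach (invented data).
# (minimum citations to enter class, rank): rank 6 = top 1%, ..., rank 1 = below 50%.
classes = [(60, 6), (35, 5), (20, 4), (12, 3), (5, 2), (0, 1)]

def rank_of(citations):
    """Assign a paper to the highest percentile-rank class it qualifies for."""
    for threshold, rank in classes:
        if citations >= threshold:
            return rank
    return 1

papers = [70, 3, 15, 0, 40, 8]  # hypothetical citation counts for a unit's papers
ranks = [rank_of(c) for c in papers]
proportions = {r: ranks.count(r) / len(ranks) for r in range(1, 7)}

# Single-value summary: mean of the ranks weighted by the proportion in each class.
score = sum(r * p for r, p in proportions.items())
print(f"proportions by rank: {proportions}")
print(f"weighted score = {score:.2f} (the field expectation is 1.91)")
```

The I3 variant would replace the proportions with the absolute numbers of papers in each class, so that productivity as well as relative impact contributes to the score.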
|
A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> We suggest that a h-type index - equal to h if you have published h papers, each of which has at least h citations - would be a useful supplement to journal impact factors. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> ![Graphic][1] ::: ::: ©cartoonbank.com. All Rights Reserved. The integrity of data, and transparency about their acquisition, are vital to science. The impact factor data that are gathered and sold by Thomson Scientific (formerly the Institute of Scientific Information, or ISI) have a strong <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> A theoretical model of the dependence of Hirsch-type indices on the number of publications and the average citation rate is tested successfully on empirical samples of journal h-indices. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> It always begins innocently enough! In the middle of the 19th century, mining and earthmoving were increasingly important enterprises of the industrial revolution. To remove rock and to open mine shafts, an explosive was needed, but nitroglycerine was too unstable for practical use. The Swedish scientist/inventor Alfred Nobel discovered that mixing nitroglycerine with the diatomaceous earth kieselguhr produced a stable explosive product he patented as dynamite, which was quickly adopted by the mining and construction industries. In the early 20th century, the Italian physicist Enrico Fermi, while attempting to understand the structure of atomic nuclei, discovered that nuclei bombarded by neutrons would split and release large amounts of energy. As others have employed these discoveries, both dynamite and nuclear fission have had destructive effects on society that were initially unimaginable by their discoverers. It was only a quarter century after the first nuclear fission bombs that Eugene Garfield, a library scientist and structural linguist from the University of Pennsylvania, discovered a metric that could be used to select journals for inclusion in his new publication Genetics Citation Index (the forerunner of Science Citation Index, which was subsequently commercialized by Garfield’s company Institute for Scientific Information). 
This metric for journals was named “impact factor” and was to be calculated “based on 2 elements: the numerator, which is the number of citations in the current year to any items published in a journal in the previous 2 years, and the denominator, which is the number of substantive articles (source items) published in the same 2 years.” 1,2 Thus, although the journal impact factor was born innocently enough, just like the examples involving Nobel and Fermi, Garfield’s impact factor is now being used by others in ways that threaten to destroy scientific inquiry as we know it. 3,4 For much of human history (about 200,000 generations), scientists were few in number, often worked in relative isolation, and only communicated findings to close <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> The application of currently available sophisticated algorithms of citation analysis allows for the incorporation of the “quality” of citations in the evaluation of scientific journals. We sought to compare the newly introduced SCImago journal rank (SJR) indicator with the journal impact factor (IF). We retrieved relevant information from the official Web sites hosting the above indices and their source databases. The SJR indicator is an open-access resource, while the journal IF requires paid subscription. The SJR indicator (based on Scopus data) lists considerably more journal titles published in a wider variety of countries and languages, than the journal IF (based on Web of Science data). Both indices divide citations to a journal by articles of the journal, during a specific time period. However, contrary to the journal IF, the SJR indicator attributes different weight to citations depending on the “prestige” of the citing journal without the influence of journal self-citations; prestige is estimated... <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> This study examines the use of journal rankings and proposes a new method of measuring IS journal impact based on the Hirsch family of indices (Hirsch 2005; Sidiropoulos et al. 2006). Journal rankings are a very important exercise in academia since they impact tenure and promotion decisions. Current methods employed to rank journal influence are shown to be subjective. We propose that the Hirsch Index (2005) and Contemporary Hirsch Index (Sidiropoulos et al. 2006) based on data from Publish or Perish be adopted as a more objective journal ranking method. To demonstrate the results of using this methodology, it is applied to the “pure MIS” journals ranked by Rainer and Miller (2005). The authors find substantial differences between the scholar rankings and those obtained using the Hirsch family of indices. They also find that the contemporary Hirsch Index allows researchers to identify journals that are rising or declining in influence. <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> The launching of Scopus and Google Scholar, and methodological developments in Social Network Analysis have made many more indicators for evaluating journals available than the traditional Impact Factor, Cited Half-life, and Immediacy Index of the ISI. In this study, these new indicators are compared with one another and with the older ones. 
Do the various indicators measure new dimensions of the citation networks, or are they highly correlated among them? Are they robust and relatively stable over time? Two main dimensions are distinguished -- size and impact -- which together shape influence. The H-index combines the two dimensions and can also be considered as an indicator of reach (like Indegree). PageRank is mainly an indicator of size, but has important interactions with centrality measures. The Scimago Journal Ranking (SJR) indicator provides an alternative to the Journal Impact Factor, but the computation is less easy. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> In this article I study characteristics of the journal impact factor (JIF) computed using a 5-year citation window as compared with the classical JIF computed using a 2-year citation window. Since 2007 ISI-Thomson Reuters has published the new 5-year impact factor in the JCR database. I studied changes in the distribution of JIFs when the citation window was enlarged. The distributions of journals according their 5-year JIFs were very similar all years studied, and were also similar to the distribution according to the 2-year JIFs. In about 72% of journals, the JIF increased when the longer citation window was used. Plots of 5-year JIFs against rank closely followed a beta function with two exponents. Thus, the 5-year JIF seems to behave very similarly to the 2-year JIF. The results also suggest that gains in JIF with the longer citation window tend to distribute similarly in all years. 
Changes in these gains also tend to distribute similarly from 1 year to the following year. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index which is the geometric mean of hg. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, ranking implied by Thomson Reuters’ Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference. <s> BIB011 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters impact factor reveals several weaknesses in this commonly-used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. <s> BIB012 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> of 45,955 accepted an average of 55.2 articles per journal. By that calculation, the most flagrant offenders may be coercing most of their contribu- tors. However, this calcula- tion does not for variation in the of articles in jour- nals, references article, ciplines. regression analy- ses, <s> BIB013 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> A discipline such as business and management (BM second, for the purpose of normalising citation data as it is well-known that citation rates vary significantly between different disciplines. And third, because journal rankings and lists tend to split their classifications into different subjects—for example, the Association of Business Schools list, which is a standard in the UK, has 22 different fields. Unfortunately, at the moment these are created in an ad-hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously based on actual citation patterns. We have examined 450 journals in B&M, which are included in the ISI Web of Science and analysed the cross-citation rates between them enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed. 
<s> BIB014 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of Journal Quality: The Impact Factor and Other Metrics <s> The article critically examines how work is shaped by performance measures. Its specific focus is upon the use of journal lists, rather than the detail of their construction, in conditioning the research activity of academics. It is argued that an effect of the ‘one size fits all’ logic of journal lists is to endorse and cultivate a research monoculture in which particular criteria, favoured by a given list, assume the status of a universal benchmark of performance (‘research quality’). The article demonstrates, with reference to the Association of Business Schools (ABS) ‘Journal Guide’, how use of a journal list can come to dominate and define the focus and trajectory of a field of research, with detrimental consequences for the development of scholarship. <s> BIB015
|
So far, we have considered the impact of individual papers or researchers, but of equal importance is the impact of journals, in terms of libraries' decisions about which journals to take (less important in the age of e-journals), authors' decisions about where to submit their papers, and subsequent judgements of the quality of the paper. Indeed, journal ranking lists such as that of the UK Association of Business Schools (ABS) have a huge effect on research behaviour BIB015 . Until recently, the journal impact factor (JIF) has been the pre-eminent measure. It was originally created by Garfield as a simple way of choosing journals for the SCI but, once it was routinely produced in WoS (which holds the copyright to produce it), it became a standard. Garfield recognised its limitations and also recommended a metric called the "cited half-life", which is a measure of how long citations last. Specifically, it is the median age of papers cited in a particular year, so a cited half-life of five years means that 50% of the citations are to papers published in the last five years. The JIF is simply the mean citations per paper for a journal over a two-year period. For example, the 2014 JIF is the number of citations in 2014 to papers published in a journal in 2012 and 2013, divided by the number of such papers. WoS also publishes a 5-year JIF because in many disciplines two years is too short a time period. It is generally agreed that the JIF has few benefits for evaluating research, but many deficiencies BIB005 BIB012 . Even Garfield (1998) has warned about its over-use 18 . The JIF depends heavily on the research field: as we have already seen, there are large differences in the publishing and citing habits of different disciplines, and this is reflected in large differences in typical JIF values between fields. The two-year window is also a very short time period for many disciplines, especially given the lead time between submitting a paper and having it published, which may itself be two years. In management, many journals have a cited half-life of over 10 years, while in cell biology it is typically less than 6. The 5-year JIF is better in this respect BIB010 . There is a lack of transparency in the way the JIF is calculated, and this casts doubt on the results. BIB005 studied medical journals and could not reproduce the appropriate figures. It is highly dependent on which types of papers are included in the denominator. In 2007, the editors of three prestigious medical journals published a paper questioning the data BIB003 . Differences have also been found between JIFs calculated in WoS and Scopus for economics, resulting from different journal coverage. It is possible for journals to deliberately distort the results by, for example, publishing many review articles, which are more highly cited; publishing short reports or book reviews that get cited but are not included in the count of papers; or pressuring authors to gratuitously cite additional papers from the journal BIB013 . The Journal of the American College of Cardiology, for example, publishes each year an overview of highlights from the previous year so that the IF of the journal is boosted. Finally, if used for assessing individual researchers or papers, the JIF is unrepresentative. As Figure 1 shows, the distribution of citations within a journal is highly skewed, so the mean value will be distorted by a few highly cited papers and will not represent the significant number that may never be cited at all.
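To make the JIF calculation and the skewness problem concrete, the sketch below (with invented citation counts) computes a two-year impact factor and contrasts the mean with the median.

```python
# Minimal sketch of a two-year journal impact factor (invented data).
# citations_2014[i] = citations received in 2014 by the i-th paper the journal
# published in 2012-2013; the distribution is deliberately skewed.

from statistics import median

citations_2014 = [120, 15, 6, 3, 2, 1, 1, 0, 0, 0, 0, 0]

jif = sum(citations_2014) / len(citations_2014)   # mean citations per paper

print(f"JIF (mean) = {jif:.2f}, median = {median(citations_2014)}")
# One highly cited paper pushes the mean to about 12.3, while the median number
# of citations is 1 and five of the twelve papers were never cited at all.
```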
In response to criticisms of the JIF, several more sophisticated metrics have been developed, although the price of sophistication is complexity of calculation and a lack of intuitiveness about what they mean. The first metrics we will consider take into account not just the quantity of citations but also their quality, in terms of the prestige of the citing journal. They are based on iterative algorithms over a network, like Google's PageRank, that initially assign all journals an equal amount of prestige and then iterate the solution, based on the number of citations (the links) between the journals (nodes), until a steady state is reached. The first such measure was developed by Pinski and Narin (1976) but had calculation problems. Since then, further measures of this type have been developed BIB001 BIB008 , notably the Eigenfactor, with its associated article influence score, and the SCImago Journal Rank (SJR). The SJR works in a similar way to the Eigenfactor but includes within it a size normalisation factor and so is more akin to the article influence score. Each journal is a node and each directed connection is a normalised value of the number of citations from one journal to another over a three-year window. It is normalised by the total number of citations in the citing journal for the year in question. It works in two phases: 1. An un-normalised value of journal prestige is calculated iteratively until a steady state is reached. The value of prestige actually includes three components: a fixed amount for being included in the database (Scopus); an amount dependent on the number of papers the journal produces; and a citation amount dependent on the number of citations received and the prestige of the citing sources. However, there are a number of arbitrary weighting constants in the calculation. 2. The value from 1., which is size-dependent, is then normalised by the number of published articles and adjusted to give an "easy-to-use" value. González-Pereira et al. (2010) carried out extensive empirical comparisons with a 3-year JIF (on Scopus data). The main conclusions were that the two were highly correlated, but the SJR showed that some journals with high JIFs and lower SJRs were indeed gaining citations from less prestigious sources. This was seen most clearly in the computer science field, where the top ten journals based on the two metrics were entirely different except for the number one, which was clearly a massive outlier (Briefings in Bioinformatics). Values for the JIF are significantly higher than for the SJR. Falagas et al. BIB006 also compared the SJR favourably with the JIF. There are several limitations of these second-generation measures: the values for "prestige" are difficult to interpret as they are not a mean citation value but only make sense in comparison with others; they are still not normalised for subject areas; and the subject areas themselves are open to disagreement BIB014 . A further development of the SJR indicator has been produced, with the refinement that, in weighting the citations according to the prestige of the citing journal, the relatedness of the two journals is also taken into account. An extra term is added, based on the cosine of the angle between the co-citation vectors of the journals, so that citations from a journal in a highly related area count for more. It is claimed that this also goes some way towards reducing the disparity of scores between subjects. However, it also makes the indicator even more complex, hard to compute, and less understandable.
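To give a feel for how such prestige measures are computed, here is a toy power-iteration sketch over a small journal cross-citation matrix (invented values). It is not the actual SJR or Eigenfactor algorithm, both of which add damping terms, size normalisation and other weighting constants, but it shows the basic iterate-to-steady-state idea.

```python
# Toy PageRank-style prestige calculation for three journals (invented data).
# Real indicators such as SJR add damping factors and size normalisation.

# citations[i][j] = citations from journal i to journal j.
citations = [
    [0, 8, 2],
    [4, 0, 6],
    [1, 3, 0],
]
n = len(citations)

# Row-normalise so that each citing journal distributes one unit of prestige
# across the journals it cites.
row_totals = [sum(row) for row in citations]
transfer = [[citations[i][j] / row_totals[i] for j in range(n)] for i in range(n)]

prestige = [1.0 / n] * n          # start with equal prestige for every journal
for _ in range(100):              # iterate towards the steady state
    prestige = [sum(prestige[i] * transfer[i][j] for i in range(n)) for j in range(n)]

print([round(p, 3) for p in prestige])
# A journal cited by prestigious journals scores higher than a raw citation
# count alone would suggest.
```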
The h-index can also be used to measure the impact of journals as it can be applied to any collection of cited papers BIB002 BIB004 . Studies have been carried out in several disciplines: marketing BIB011 , economics, information systems BIB007 and business. The advantages and disadvantages of the h-index for journals are the same as for the h-index generally, but it is particularly the case that it is not normalised for different disciplines, and it is also strongly affected by the number of papers published. So a journal that publishes a small number of highly cited papers will be disadvantaged in comparison with one publishing many papers, even if these are not so highly cited. Google Scholar Metrics uses a 5-year h-index and also shows the median number of citations for the papers in the h core, to allow for differences between journals with the same h-index. It has been critiqued by Delgado-López-Cózar and Cabezas-Clavijo (2012). Another recently developed metric, implemented in Scopus but not WoS, is SNIP (source normalised impact per paper) BIB009 . This normalises for different fields based on the citing-side form of normalisation discussed above; that is, rather than normalising with respect to the citations that a journal receives, it normalises with respect to the number of references in the citing journals. The method proceeds in four stages: 1. First, the raw impact per paper (RIP) is calculated for the journal. This is essentially a three-year JIF: the total number of citations from year n to papers in the preceding three years is divided by the number of citable papers. 2. Then the database citation potential for the journal (DCP) is calculated. This is done by finding all the papers in year n that cite papers in the journal over the preceding ten years, and then calculating the arithmetic mean of the number of references (to papers in the database, Scopus) in these papers.
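As a quick illustration, the sketch below computes the h-index for a collection of papers (the citation counts are invented); the same function applies whether the collection is a journal, a department or an individual.

```python
# Minimal sketch of the h-index (invented citation counts).
# h is the largest number such that h papers each have at least h citations.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

journal_citations = [45, 32, 31, 20, 12, 9, 9, 7, 4, 2, 1, 0]
print(h_index(journal_citations))  # 7: seven papers have >= 7 citations, but not eight with >= 8
```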
|
A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (a) citation behavior varies among fields of science and, therefore, leads to systematic differences, and (b) there are no statistics to inform us whether differences are significant. The recently introduced “source normalized impact per paper” indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved, which makes it impossible to test for significance. Using fractional counting of citations—based on the assumption that impact is proportionate to the number of references in the citing documents—citations can be contextualized at the paper level and aggregated impacts of sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386) despite a five-f old difference between their impact factors (2.793 and 13.156, respectively). © 2010 Wiley Periodicals, Inc. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> This paper is a reply to the article "Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations", published by Loet Leydesdorff and Tobias Opthof (arXiv:1004.3580v2 [cs.DL]). It clarifies the relationship between SNIP and Elsevier's Scopus. Since Leydesdorff and Opthof's description of SNIP is not complete, it indicates four key differences between SNIP and the indicator proposed by the two authors, and argues why the former is more valid than the latter. Nevertheless, the idea of fractional citation counting deserves further exploration. The paper discusses difficulties that arise if one attempts to apply this principle at the level of individual (citing) papers. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> In bibliometrics, the association of “impact” with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. 
In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories (“Information Science & Library Science” and “Multidisciplinary Sciences”). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified. © 2011 Wiley Periodicals, Inc. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> The SNIP (source normalized impact per paper) indicator is an indicator of the citation impact of scientific journals. The indicator, introduced by Henk Moed in 2010, is included in Elsevier's Scopus database. The SNIP indicator uses a source normalized approach to correct for differences in citation practices between scientific fields. The strength of this approach is that it does not require a field classification system in which the boundaries of fields are explicitly defined. In this paper, a number of modifications that will be made to the SNIP indicator are explained, and the advantages of the resulting revised SNIP indicator are pointed out. It is argued that the original SNIP indicator has some counterintuitive properties, and it is shown mathematically that the revised SNIP indicator does not have these properties. Empirically, the differences between the original SNIP indicator and the revised one turn out to be relatively small, although some systematic differences can be observed. Relations with other source normalized indicators proposed in the literature are discussed as well. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> Journal Impact Factors (IFs) can be considered historically as the first attempt to normalize citation distributions by using averages over two years. However, it has been recognized that citation distributions vary among fields of science and that one needs to normalize for this. Furthermore, the mean-or any central-tendency statistics-is not a good representation of the citation distribution because these distributions are skewed. Important steps have been taken to solve these two problems during the last few years. First, one can normalize at the article level using the citing audience as the reference set. Second, one can use non-parametric statistics for testing the significance of differences among ratings. A proportion of most-highly cited papers (the top-10% or top-quartile) on the basis of fractional counting of the citations may provide an alternative to the current IF. This indicator is intuitively simple, allows for statistical testing, and accords with the state of the art. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> The DCP is then relativized (RDCP). The DCP is calculated for all journals in the database <s> The modified SNIP indicator of Elsevier, as recently explained by Waltman et al. (2013) in this journal, solves some of the problems which Leydesdorff & Opthof (2010 and 2011) indicated in relation to the original SNIP indicator (Moed, 2010 and 2011). The use of an arithmetic average, however, remains unfortunate in the case of scientometric distributions because these can be extremely skewed (Seglen, 1992 and 1997). 
The new indicator cannot (or hardly) be reproduced independently when used for evaluation purposes, and remains in this sense opaque from the perspective of evaluated units and scholars. <s> BIB006
|
and the median value is found. Then RDCP_j = DCP_j / Median DCP. Thus a field that has many references will have an RDCP above 1. 4. Finally, SNIP_j = RIP_j / RDCP_j. The result is that journals in fields that have a high citation potential will have their RIP reduced, and vice versa for fields with low citation potential. This is an innovative measure, both because it normalises for the number of publications as well as the field, and because the set of reference journals is specific to each journal rather than being defined beforehand somewhat arbitrarily. Moed presents empirical evidence from the sciences that the subject normalisation does work, even at the level of pairs of journals in the same field. Also, because it only uses references to papers within the database, it corrects for coverage differences: a journal with low database coverage will have a lower DCP and thus a higher value of SNIP. A modified version of SNIP has recently been introduced BIB004 to overcome certain technical limitations, and also in response to criticism from BIB001 BIB002 , who favour a fractional citation approach. The modified version involves two main changes: i) the mean number of references (DCP), but not the RIP, is now calculated using the harmonic mean rather than the arithmetic mean; ii) the relativisation of the DCP to the overall median DCP is now omitted entirely, so that SNIP = RIP/DCP. Mingers (2014) has pointed out two problems with the revised SNIP. First, because the value is no longer relativised, it does not bear any particular relation to either the RIP for a journal or the average number of citations/references in the database, which makes it harder to interpret. Second, the harmonic mean, unlike the arithmetic mean, is sensitive to the variability of the values: the less even the numbers of references, the lower the harmonic mean, and this can make a significant difference to the value of SNIP, which seems inappropriate. There is also a more general problem with these sophisticated metrics that work across a whole database, which is that the results cannot easily be replicated, as most researchers do not have sufficient access to the databases BIB006 . Two other alternatives to the JIF have been suggested BIB005 : fractional counting of citations, which is similar in principle to SNIP, and the use of non-parametric statistics such as percentiles, which avoids using means, which are inappropriate with highly skewed data. A specific metric based on percentiles, called I3, has been proposed BIB003 ; it combines relative citation impact with productivity in terms of the numbers of papers, but is normalised through the use of percentiles (see Section 4.3 for more explanation).
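The sketch below (all data invented; an illustration of the logic rather than the official Scopus computation) follows these stages: a raw impact per paper, a database citation potential from the citing papers' reference counts, relativisation to the database median, and the final SNIP value, together with the revised variant that uses a harmonic mean and drops the relativisation.

```python
# Minimal sketch of the SNIP logic (invented data; not the official Scopus algorithm).
from statistics import mean, median, harmonic_mean

# 1. Raw impact per paper (RIP): mean citations in year n per citable paper
#    published in the preceding three years.
citations_to_journal = 240
citable_papers = 120
rip = citations_to_journal / citable_papers

# 2. Database citation potential (DCP): average number of references (to papers
#    in the database) in the papers that cite the journal.
refs_in_citing_papers = [6, 3, 9, 2, 5]
dcp = mean(refs_in_citing_papers)

# 3. Relativise the DCP to the median DCP of all journals in the database
#    (a made-up set of DCP values stands in for the whole database here).
dcp_all_journals = [2.0, 3.0, 5.0, 6.0, 9.0]
rdcp = dcp / median(dcp_all_journals)

# 4. Original SNIP.
snip_original = rip / rdcp

# Revised SNIP: harmonic mean of the reference counts, no relativisation.
snip_revised = rip / harmonic_mean(refs_in_citing_papers)

print(f"RIP = {rip:.2f}, original SNIP = {snip_original:.2f}, revised SNIP = {snip_revised:.2f}")
# Because the harmonic mean is pulled down by uneven reference counts and the
# relativisation step is dropped, the two variants sit on different scales.
```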
|
A Review of Theory and Practice in Scientometrics <s> Visualizing and mapping science <s> A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Visualizing and mapping science <s> Abstract Measurement of the effectiveness of science policies is analyzed as a multi-level problem. Journal-journal citations are discussed as a potential candidate for a domain beyond the control of policy-makers and authors or research groups and therefore may function as a relatively stable and easily accessible baseline for the calibration of outputs and outcomes of science policy. A method is developed, usingSCPsJCRs which is then applied to the two cases of water pollution and humanisation of labor. This method can also be used as a simple indicator for the development of journal-journal citation patterns over time. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Visualizing and mapping science <s> In principle, a scientometric transaction matrix can be modelled by assuming that the number of transactions is the result of independent row and column contributions. More often one is primarily interested in the cross-structural relations between the participating entities, whereas the row and column margin~tls are of lesser or no importance. The values of the residuals after fitting an independence model to a complete transaction matrix can be analyzed by correspondence analysis to investigate the structure of the transactions between the rows and columns, after correcting for their marginal t~equencies. Recently a modification of correspondence analysis has been developed, quasi-correspondence analysis, which seems quite suitable for the analysis of citation-based transaction matrices which are incomplete or in which the incorporation of certain transactions may seem inappropriate, An illustration of both data analysis-techniques will be given using a journal-to-journal citation matrix. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Visualizing and mapping science <s> It is shown that the h-index on the one hand, and the A- and g-indices on the other, measure different things. The A-index, however, seems overly sensitive to one extremely highly cited article. For this reason it would seem that the g-index is the more useful of the two. As to the h- and the g-index: they do measure different aspects of a scientist’s publication list. Certainly the h-index does not tell the full story, and, although a more sensitive indicator than the h-index, neither does the g-index. Taken together, g and h present a concise picture of a scientist’s achievements in terms of publications and citations. 
<s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Visualizing and mapping science <s> We thank Kevin Boyack, Loet Leydesdorff, and Antoine Schoen for open and fruitful discussions about this paper. This research was undertaken largely at Georgia Tech drawing on support from the U.S. National Science Foundation (NSF) through the Center for Nanotechnology in Society (Arizona State University; Award No. 0531194); and NSF Award No. 1064146 ("Revealing Innovation Pathways: Hybrid Science Maps for Technology Assessment and Foresight"). Part of this research was also undertaken in collaboration with the Center for Nanotechnology in Society, University of California Santa Barbara (NSF Awards No. 0938099 and No. 0531184). The findings and observations contained in this paper are those of the authors and do not necessarily reflect the views of the US National Science Foundation. <s> BIB005
|
In addition to its use as an instrument for the evaluation of impact, citations can also be considered as an operationalization of a core process in scholarly communication, namely referencing. Citations refer to texts other than the one that contains the cited references, and thus induce a dynamic vision of the sciences developing as networks of relations . Following the development of co-citation analysis BIB001 and co-word analysis, several research teams began to use these data for visualization purposes, using multidimensional scaling and other such techniques BIB002 BIB003. BIB004 proposed the use of alluvial maps for showing the dynamics of science. Rafols et al. (2010) first proposed to use these "global" maps as backgrounds for overlays that inform the user about the position of specific sets of documents, analogously to overlaying institutional address information on geographical maps such as Google Maps. More recently, these techniques have been further refined, using both journal and patent data BIB005.
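As a simple illustration of the raw material behind such maps, the following sketch counts how often pairs of documents are co-cited; the reference lists are invented, not data from any of the studies cited, and the resulting matrix is the kind of input to which multidimensional scaling or force-based layouts are then applied.

```python
from itertools import combinations
from collections import Counter

# Toy reference lists: each citing paper lists the documents it cites (invented data).
reference_lists = [
    ["Small1973", "Garfield1979", "Callon1983"],
    ["Small1973", "Garfield1979"],
    ["Callon1983", "Leydesdorff1987"],
]

# Two documents are co-cited whenever they appear together in a reference list.
cocitation = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitation[(a, b)] += 1

for pair, count in cocitation.most_common():
    print(pair, count)
```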
|
A Review of Theory and Practice in Scientometrics <s> Visualisation techniques <s> Three general methods for obtaining measures of diversity within a population and dissimilarity between populations are discussed. One is based on an intrinsic notion of dissimilarity between individuals and others make use of the concepts of entropy and discrimination. The use of a diversity measure in apportionment of diversity between and within populations is discussed. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Visualisation techniques <s> We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Visualisation techniques <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Visualisation techniques <s> A citation-based indicator for interdisciplinarity has been missing hitherto among the set of available journal indicators. In this study, we investigate network indicators (betweenness centrality), unevenness indicators (Shannon entropy, the Gini coefficient), and more recently proposed Rao–Stirling measures for “interdisciplinarity.” The latter index combines the statistics of both citation distributions of journals (vector-based) and distances in citation networks among journals (matrix-based). The effects of various normalizations are specified and measured using the matrix of 8207 journals contained in the Journal Citation Reports of the (Social) Science Citation Index 2008. Betweenness centrality in symmetrical (1-mode) cosine-normalized networks provides an indicator outperforming betweenness in the asymmetrical (2-mode) citation network. Among the vector-based indicators, Shannon entropy performs better than the Gini coefficient, but is sensitive to size. Science and Nature, for example, are indicated at the top of the list. 
The new diversity measure provides reasonable results when (1−cosine) is assumed as a measure for the distance, but results using Euclidean distances were difficult to interpret. <s> BIB004
|
The systems view of multidimensional scaling (MDS) is deterministic, whereas the graph-analytic approach can also begin with a random or arbitrary choice of a starting point. Using MDS, the network is first conceptualized as a multi-dimensional space that is then reduced stepwise to lower dimensionality. At each step, the stress increases. Kruskal's stress function is formulated as follows:

$$\mathrm{Stress} = \sqrt{\frac{\sum_{i<j}\left(d_{ij} - \lVert x_i - x_j \rVert\right)^{2}}{\sum_{i<j} d_{ij}^{2}}}$$

In this formula ||x_i - x_j|| is the distance between points i and j on the map, while the distance measure d_ij can be, for example, the Euclidean distance in the data under study. One can use MDS to illustrate factor-analytic results in tables, but in this case the Pearson correlation is used as the similarity criterion. Spring-embedded or force-based algorithms can be considered as a generalization of MDS, but were inspired by developments in graph theory during the 1980s. Kamada and Kawai (1989) were the first to reformulate the problem of achieving target distances in a network in terms of energy optimization. They formulated the ensuing stress (energy) in the graphical representation as follows:

$$E = \sum_{i<j} \tfrac{1}{2}\, k_{ij}\left(\lVert x_i - x_j \rVert - d_{ij}\right)^{2}$$

where d_ij is the target (graph-theoretic) distance between nodes i and j, and k_ij is the strength of the spring between them. The ensuing difference at the conceptual level is that spring-embedding is a graph-theoretical concept developed for the topology of a network: the weighting is achieved for each individual link. MDS operates on the multivariate space as a system, and hence refers to a different topology. In the multivariate space, two points can be close to each other without entertaining a relationship; for example, they can be close or distant in terms of the correlation between their patterns of relationships. In the network topology, Euclidean distances and geodesics (shortest distances) are conceptually more meaningful than correlation-based measures, whereas in the vector space correlation-based analysis (such as factor analysis) is the more natural tool. Technically, one can also input a cosine-normalized matrix into a spring-embedded algorithm; the value of (1 - cosine) can then be considered as a distance in the vector space BIB004. BIB002 developed an algorithm in graph theory that searches for (latent) community structures in networks of observable relations. An objective function for the decomposition is recursively minimized and thus a "modularity" Q can be measured (and normalized between zero and one). BIB003 subsequently proposed a fast heuristic for this modularity optimization that scales to very large networks. An example is shown in Figure 2, which is based on a set of 505 papers published in EJOR. In Figure 2 we can see some sensible groupings - for example transportation/scheduling, optimization/programming, decision analysis, performance measurement, and a fifth around management/application areas. Figure 3 shows the 613 journals that are most highly cited in the same 505 EJOR papers (12,172 citations between them), but overlaid onto a global map of science. These cited sources can, for example, be considered as an operationalization of the knowledge base on which these articles draw. It can be seen that, apart from the main area around OR and management, there is significant citation to the environmental sciences, chemical engineering, and biomedicine (the overlay was generated using a routine available at http://www.leydesdorff.net/software/ti). Rao-Stirling diversity - a measure for the interdisciplinarity of this knowledge base BIB001 - is, however, low (0.1187); in other words, citation within the specialty prevails. In summary, the visualizations enable us to represent the current state of the field (Figure 2), its knowledge base (Figure 3), and its relevant environments (Figure 4).
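The chain just described - cosine-normalizing a citation matrix, treating (1 - cosine) as a distance, producing an MDS layout, extracting modularity-based communities, and computing Rao-Stirling diversity - can be sketched as follows. The journal names and citation counts are invented, and greedy modularity optimization (networkx) stands in for the Louvain routine; this is an illustrative sketch, not the pipeline used to produce Figures 2-4.

```python
import numpy as np
import networkx as nx
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS
from networkx.algorithms.community import greedy_modularity_communities

# Toy journal-by-journal citation matrix (rows cite columns); the numbers are invented.
labels = ["EJOR", "MgmtSci", "TranspRes", "DecisAnal", "OmegaJ"]
C = np.array([
    [50, 20,  5, 10,  8],
    [18, 40,  2, 12,  6],
    [ 6,  3, 30,  1,  2],
    [ 9, 11,  1, 25,  4],
    [ 7,  5,  2,  3, 20],
], dtype=float)

# Vector-space view: cosine-normalize the citing profiles and use (1 - cosine) as a distance.
sim = cosine_similarity(C)
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)

# Metric MDS layout on the precomputed distances (minimizing a Kruskal-type stress).
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

# Graph view: weighted similarity graph plus modularity-based community detection.
G = nx.Graph()
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        G.add_edge(labels[i], labels[j], weight=sim[i, j])
communities = greedy_modularity_communities(G, weight="weight")

# Rao-Stirling diversity of EJOR's citing profile: sum over pairs of p_i * p_j * d_ij.
p = C[0] / C[0].sum()
rao_stirling = sum(p[i] * p[j] * dist[i, j]
                   for i in range(len(p)) for j in range(len(p)) if i != j)

print(np.round(coords, 2))
print([sorted(c) for c in communities])
print(f"Rao-Stirling diversity of EJOR's references: {rao_stirling:.3f}")
```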
Second-order visualization programs available on the internet, such as VOSviewer and CitNetExplorer, enable the user to automatically generate several of these visualizations from data downloaded from WoS or Scopus. One can also envisage making movies from such data: these networks evolve over time and the diagrams can be animated - see, for example, http://www.leydesdorff.net/journals/nanotech/ or the other examples at http://www.leydesdorff/visone for an overview and instructions.
|
A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> Peer review can be performed successfully only if those involved have a clear idea as to its fundamental purpose. Most authors of articles on the subject assume that the purpose of peer review is quality control. This is an inadequate answer. The fundamental purpose of peer review in the biomedical sciences must be consistent with that of medicine itself, to cure sometimes, to relieve often, to comfort always. Peer review must therefore aim to facilitate the introduction into medicine of improved ways of curing, relieving, and comforting patients. The fulfillment of this aim requires both quality control and the encouragement of innovation. If an appropriate balance between the two is lost, then peer review will fail to fulfill its purpose. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> SummaryA system of input, output, and efficiency indicators is sketched out, with each indicator related to basic research, applied research, and experimental development. Mainly, this scheme is inspired by empirical innovation economics (represented in Germany, e.g., by H. Grupp) and by “advanced bibliometrics' and scientometrics (profiled by van Raan and others). After considering strengths and weaknesses of some of the indicators, possible additional “entry points' for institutions of information delivery are examined, such contributing to an enrichment of existing indicators. And to a “Nationalökonomik des Geistes', requested from librarians in the twenties of the last century by A. von Harnack. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> This paper addresses research performance monitoring of the social sciences and the humanities using citation analysis. Main differences in publication and citation behavior between the (basic) sciences and the social sciences and humanities are outlined. Limitations of the (S)SCI and A&HCI for monitoring research performance are considered. For research performance monitoring in many social sciences and humanities, the methods used in science need to be extended. A broader range of both publications (including non-ISI journals and monographs) and citation indicators (including non-ISI reference citation values) is needed. Three options for bibliometric monitoring are discussed. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> The paper discusses the strengths and limitations of ‘metrics’ and peer review in large-scale evaluations of scholarly research performance. A real challenge is to combine the two methodologies in such a way that the strength of the first compensates for the limitations of the second, and vice versa. It underlines the need to systematically take into account the unintended effects of the use of metrics. It proposes a set of general criteria for the proper use of bibliometric indicators within peer-review processes, and applies these to a particular case: the UK Research Assessment Exercise (RAE). Copyright , Beech Tree Publishing. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> The aim of the paper is to control the reliability of peer review when evaluating academic research in the Three-Year Research Assessment Exercise developed in Italy. Our analysis covers four disciplinary sectors: chemistry, biology, humanities and economics. 
The results provide evidence that highlights strengths and weaknesses of peer review for judging the quality of the academic research in different fields of science, vis-a-vis bibliometric indicators. Moreover, some basic features of the evaluation process are discussed, to understand their usefulness for reinforcing the effectiveness of the peers' final outcome. Copyright , Beech Tree Publishing. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> A longitudinal analysis of UK science covering almost 20 years revealed in the years prior to a Research Assessment Exercise (RAE 1992, 1996 and 2001) three distinct bibliometric patterns, that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in a RAE. When in the RAE 1992 total publications counts were requested, UK scientists substantially increased their article production. When a shift in evaluation criteria in the RAE 1996 was announced from ‘quantity’ to ‘quality’, UK authors gradually increased their number of papers in journals with a relatively high citation impact. And during 1997–2000, institutions raised their number of active research staff by stimulating their staff members to collaborate more intensively, or at least to co-author more intensively, although their joint paper productivity did not. This finding suggests that, along the way towards the RAE 2001, evaluated units in a sense shifted back from ‘quality’ to ‘quantity’. The analysis also observed a slight upward trend in overall UK citation impact, corroborating conclusions from an earlier study. The implications of the findings for the use of citation analysis in the RAE are briefly discussed. <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> Evaluations of research quality in universities are now widely used in the advanced economies. The UK's Research Assessment Exercise (RAE) is the most highly developed of these research evaluations. This article uses the results from the 2001 RAE in political science to assess the utility of citations as a measure of outcome, relative to other possible indicators. The data come from the 4,400 submissions to the RAE political science panel. The 28,128 citations analysed relate not only to journal articles, but to all submitted publications – including authored and edited books and book chapters. The results show that citations are the most important predictor of the RAE outcome, followed by whether or not a department had a representative on the RAE panel. The results highlight the need to develop robust quantitative indicators to evaluate research quality which would obviate the need for a peer evaluation based on a large committee. Bibliometrics should form the main component of such a portfolio of quant... <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> Ageing of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogenous within specialties and fields of science is invalid. Furthermore, the delineation of fields and among specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations. The respective matches may heavily affect the representation of the units. 
Non-ISI journals are increasingly cornered into "transdisciplinary" Mode-2 functions with the exception of specialist journals publishing in languages other than English. An "externally cited impact factor" can be calculated for these journals. The citation impact of non-ISI journals will be demonstrated using Science and Public Policy as the example. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> This article analyzes the effect of interdisciplinarity on the scientific impact of individual articles. Using all the articles published in Web of Science in 2000, we define the degree of interdisciplinarity of a given article as the percentage of its cited references made to journals of other disciplines. We show that although for all disciplines combined there is no clear correlation between the level of interdisciplinarity of articles and their citation rates, there are nonetheless some disciplines in which a higher level of interdisciplinarity is related to a higher citation rates. For other disciplines, citations decline as interdisciplinarity grows. One characteristic is visible in all disciplines: Highly disciplinary and highly interdisciplinary articles have a low scientific impact. This suggests that there might be an optimum of interdisciplinarity beyond which the research is too dispersed to find its niche and under which it is too mainstream to have high impact. Finally, the relationship between interdisciplinarity and scientific impact is highly determined by the citation characteristics of the disciplines involved: Articles citing citation-intensive disciplines are more likely to be cited by those disciplines and, hence, obtain higher citation scores than would articles citing non-citation-intensive disciplines. © 2010 Wiley Periodicals, Inc. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> Both funding agencies and scholars in science studies have become increasingly concerned with how to define and identify interdisciplinarity in research. The task is tricky, since the complexity of interdisciplinary research defies a single definition. Our study tackles this challenge by demonstrating a new typology and qualitative indicators for analyzing interdisciplinarity in research documents. The proposed conceptual framework attempts to fulfill the need for a robust and nuanced approach that is grounded in deeper knowledge of interdisciplinarity. As an example of using the framework, we discuss our empirical investigation of research proposals funded by a national funding agency in Finland. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> A citation-based indicator for interdisciplinarity has been missing hitherto among the set of available journal indicators. In this study, we investigate network indicators (betweenness centrality), unevenness indicators (Shannon entropy, the Gini coefficient), and more recently proposed Rao–Stirling measures for “interdisciplinarity.” The latter index combines the statistics of both citation distributions of journals (vector-based) and distances in citation networks among journals (matrix-based). The effects of various normalizations are specified and measured using the matrix of 8207 journals contained in the Journal Citation Reports of the (Social) Science Citation Index 2008. 
Betweenness centrality in symmetrical (1-mode) cosine-normalized networks provides an indicator outperforming betweenness in the asymmetrical (2-mode) citation network. Among the vector-based indicators, Shannon entropy performs better than the Gini coefficient, but is sensitive to size. Science and Nature, for example, are indicated at the top of the list. The new diversity measure provides reasonable results when (1−cosine) is assumed as a measure for the distance, but results using Euclidean distances were difficult to interpret. <s> BIB011 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> A well-designed and comprehensive citation index for the social sciences and humanities has many potential uses, but has yet to be realised. Significant parts of the scholarly production in these areas are not published in international journals, but in national scholarly journals, in book chapters or in monographs. The potential for covering these literatures more comprehensively can now be investigated empirically using a complete publication output data set from the higher education sector of an entire country (Norway). We find that while the international journals in the social sciences and humanities are rather small and more dispersed in specialties, representing a large but not unlimited number of outlets, the domestic journal publishing, as well as book publishing on both the international and domestic levels, show a concentration of many publications in few publication channels. These findings are promising for a more comprehensive coverage of the social sciences and humanities. <s> BIB012 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> This article studies publication patterns in the social sciences and humanities (SSH) in Flanders and Norway using two databases that both cover all SSH peer-reviewed journal articles by university scholars for the period 2005--9. The coverage of journal articles by the Web of Science (WoS) and the proportion of articles published in English are studied in detail applying the same methodologies to both databases. The study of WoS coverage and language use is chosen because the performance-based funding systems that are in place in both countries have given different emphasis to publishing in WoS covered journals. The results show very similar, almost identical evolutions in the use of English as a publication language. The proportion of articles covered by the WoS, however, is stable for Norway but has increased rapidly for Flanders. This finding shows that the parameters used in a performance-based funding system may influence the publishing patterns of researchers. Copyright The Author 2012. Published by Oxford University Press. All rights reserved. For Permissions, please email: [email protected], Oxford University Press. <s> BIB013 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> This paper presents an empirical analysis of two different methodologies for calculating national citation indicators: whole counts and fractionalised counts. The aim of our study is to investigate the effect on relative citation indicators when citations to documents are fractionalised among the authoring countries. We have performed two analyses: a time series analysis of one country and a cross-sectional analysis of 23 countries. The results show that all countries’ relative citation indicators are lower when fractionalised counting is used. 
Further, the difference between whole and fractionalised counts is generally greatest for the countries with the highest proportion of internationally co-authored articles. In our view there are strong arguments in favour of using fractionalised counts to calculate relative citation indexes at the national level, rather than using whole counts, which is the most common practice today. <s> BIB014 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with that of leading Business & Management Schools (BMS) in the UK. On the basis of various mappings and metrics, this study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management Schools; (ii) the top journals in the Association of Business Schools’ rankings span a less diverse set of disciplines than lower-ranked journals; (iii) this results in a more favourable assessment of the performance of Business & Management Schools, which are more disciplinary-focused. This citation-based analysis challenges the journal ranking-based assessment. In short, the investigation illustrates how ostensibly ‘excellence-based’ journal rankings exhibit a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of implications of these phenomena, in particular how the bias is likely to affect negatively the evaluation and associated financial resourcing of interdisciplinary research organisations, and may result in researchers becoming more compliant with disciplinary authority over time. <s> BIB015 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> There is an overall perception of increased interdisciplinarity in science, but this is difficult to confirm quantitatively owing to the lack of adequate methods to evaluate subjective phenomena. This is no different from the difficulties in establishing quantitative relationships in human and social sciences. In this paper we quantified the interdisciplinarity of scientific journals and science fields by using an entropy measurement based on the diversity of the subject categories of journals citing a specific journal. The methodology consisted in building citation networks using the Journal Citation Reports® database, in which the nodes were journals and edges were established based on citations among journals. The overall network for the 11-year period (1999–2009) studied was small-world and followed a power-law with exponential cutoff distribution with regard to the in-strength. Upon visualizing the network topology an overall structure of the various science fields could be inferred, especially their interconnections. We confirmed quantitatively that science fields are becoming increasingly interdisciplinary, with the degree of interdisplinarity (i.e. entropy) correlating strongly with the in-strength of journals and with the impact factor. <s> BIB016 </s> A Review of Theory and Practice in Scientometrics <s> Evaluation and Policy <s> We study the problem of normalizing citation impact indicators for differences in citation practices across scientific fields. 
Normalization of citation impact indicators is usually done based on a field classification system. In practice, the Web of Science journal subject categories are often used for this purpose. However, many of these subject categories have a quite broad scope and are not sufficiently homogeneous in terms of citation practices. As an alternative, we propose to work with algorithmically constructed classification systems. We construct these classification systems by performing a large-scale clustering of publications based on their citation relations. In our analysis, 12 classification systems are constructed, each at a different granularity level. The number of fields in these systems ranges from 390 to 73,205 in granularity levels 1 to 12. This contrasts with the 236 subject categories in the WoS classification system. Based on an investigation of some key characteristics of the 12 classification systems, we argue that working with a few thousand fields may be an optimal choice. We then study the effect of the choice of a classification system on the citation impact of the 500 universities included in the 2013 edition of the CWTS Leiden Ranking. We consider both the MNCS and the PPtop 10% indicator. Globally, for all the universities taken together citation impact indicators generally turn out to be relatively insensitive to the choice of a classification system. Nevertheless, for individual universities, we sometimes observe substantial differences between indicators normalized based on the journal subject categories and indicators normalized based on an appropriately chosen algorithmically constructed classification system. <s> BIB017
|
As we said in the introduction, scientometrics has come to prominence because of its use in the evaluation and management of research performance, whether at the level of the researcher, research group, institution or journal. The traditional method of research evaluation was peer review BIB004. However, this has many drawbacks - it is very time consuming and costly, subject to many biases and distortions BIB001, generally quite opaque (panel members in the 2008 UK RAE were ordered to destroy all notes for fear of litigation) BIB005, and limited in the extent to which it actually provides wide-ranging and detailed information BIB007 BIB002. Abramo and D'Angelo (2011) compared informed peer review (including the UK RAE) with bibliometrics in the natural and formal sciences in Italy and concluded that bibliometrics were clearly superior across a range of criteria - accuracy, robustness, validity, functionality, time and cost. They recognized that there were problems in the social sciences and humanities, where citation data are often not available. The effective use of bibliometrics has a number of requirements, not all of which are currently in place. First, one needs robust and comprehensive data. As we have already seen, the main databases are reliable but their coverage is limited, especially in the humanities and social sciences, and they need to enlarge their scope to cover all forms of research outputs BIB008. Google Scholar is more comprehensive, but unreliable and non-transparent. At this time, full bibliometric evaluation is feasible in science and some areas of social science, but not in the humanities or some areas of technology BIB003. It has also been suggested that nations should routinely collect data on all the publications produced within their institutions, so that the data are scrutinised and available on demand rather than having to be collected anew each time a research evaluation occurs BIB013 BIB012. Second, one needs suitable metrics that measure what is important in as unbiased a way as possible. These should not be crude ones such as simple counts of citations or papers, the h-index (although this has its advantages) or journal impact factors, but more sophisticated ones that take into account the differences in citation practices across different disciplines, as has been discussed in Section 4. This is currently an area of much debate with a range of possibilities. The traditional crown indicator (now MNCS) is subject to criticisms concerning the use of the mean on highly skewed citation data and also on the use of WoS field categories BIB017. There are source-normalised alternatives such as SNIP or fractional counting BIB014, and metrics that include the prestige of the citing journals, such as SJR. There are also moves towards non-parametric statistics based on percentiles. One dilemma is that the more sophisticated the metrics become, the less transparent and the harder to replicate they are.
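To make these alternatives concrete, the following sketch contrasts the mean-based MNCS with a percentile-based indicator (the share of papers in the top 10% of their field), using invented citation counts and field baselines; it is not the official CWTS implementation.

```python
import numpy as np

# Invented example: citation counts for one unit's papers, each tagged with its
# field's world-average citations and the field's 90th-percentile threshold.
papers = [
    {"cites": 12, "field_mean": 6.0,  "field_p90": 15},
    {"cites": 3,  "field_mean": 6.0,  "field_p90": 15},
    {"cites": 40, "field_mean": 20.0, "field_p90": 55},
    {"cites": 0,  "field_mean": 2.0,  "field_p90": 5},
]

# MNCS: average of per-paper ratios of actual to expected (field-mean) citations.
mncs = np.mean([p["cites"] / p["field_mean"] for p in papers])

# PP(top 10%): share of papers at or above the field's 90th-percentile threshold.
pp_top10 = np.mean([p["cites"] >= p["field_p90"] for p in papers])

print(f"MNCS = {mncs:.2f}, PP(top10%) = {pp_top10:.0%}")
```

Because the mean is pulled up by a single very highly cited paper while the percentile indicator is not, the two approaches can rank the same unit quite differently.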
A third area for consideration is inter- or trans-disciplinary work, and work that is more practical and practitioner-oriented. How would this be affected by a move towards bibliometrics? There is currently little research in this area BIB009, although BIB015 found a systematic bias in research evaluation against interdisciplinary research in the field of business and management. Indeed, bibliometrics is still at the stage of establishing reliable and feasible methods for defining and measuring interdisciplinarity (Wagner et al., 2011). BIB010 developed a typology and indicators to be applied to research proposals, and potentially research papers as well; BIB011 have developed citation-based metrics to measure the interdisciplinarity of journals; and Silva et al. BIB016 evaluated the relative interdisciplinarity of science fields using entropy measures (a simple sketch of such an entropy measure is given at the end of this paragraph). Fourth, we must recognise, and try to minimise, the fact that the act of measuring inevitably changes the behaviour of the people being measured. So, citation-based metrics will lead to practices, legitimate and illegitimate, to increase citations; an emphasis on 4* journals leads to a lack of innovation and a reinforcement of the status quo. For example, BIB006 detected significant patterns of response to UK research assessment metrics, with an increase in total publications after 1992 when numbers of papers were required; a shift to journals with higher citations after 1996 when quality was emphasised; and then an increase in the apparent number of research-active staff through greater collaboration during 1997-2000. Michels and Schmoch (2014) found that German researchers changed their behaviour to aim for more US-based high-impact journals in order to increase their citations. Fifth, we must be aware that often problems are caused not by the data or metrics themselves, but by their inappropriate use either by academics or by administrators BIB002. There is often a desire for "quick and dirty" results, and so simple measures such as the h-index or the JIF are used indiscriminately without due attention being paid to their limitations and biases. This also reminds us that there are ethical issues in the use of bibliometrics for research evaluation, and Furner (2014) has developed a framework for evaluation that includes ethical dimensions.
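A minimal sketch of such an entropy-based interdisciplinarity indicator, using an invented distribution of a journal's cited references over subject categories, is:

```python
import math

# Invented distribution of a journal's cited references over subject categories.
shares = {"OR/MS": 0.55, "Computer Science": 0.20, "Economics": 0.15, "Engineering": 0.10}

# Shannon entropy of the distribution as a simple interdisciplinarity indicator.
entropy = -sum(p * math.log(p) for p in shares.values() if p > 0)
max_entropy = math.log(len(shares))  # entropy if citations were spread evenly
print(f"H = {entropy:.2f} nats (maximum possible {max_entropy:.2f})")
```

As noted earlier, Shannon entropy rewards spreading citations evenly over many categories but is sensitive to the number of categories, which is one reason measures such as Rao-Stirling diversity, which also weight the distances between categories, are often preferred.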
|
A Review of Theory and Practice in Scientometrics <s> Alternative metrics <s> For institutional repositories, alternative metrics reflecting online activity present valuable indicators of interest in their holdings that can supplement traditional usage statistics. A variable mix of built-in metrics is available through popular repository platforms: Digital Commons, DSpace and EPrints. These may include download counts at the collection and/or item level, search terms, total and unique visitors, page views and social media and bookmarking metrics; additional data may be available with special plug-ins. Data provide different types of information valuable for repository managers, university administrators and authors. They can reflect both scholarly and popular impact, show readership, reflect an institution's output, justify tenure and promotion and indicate direction for collection management. Practical considerations for implementing altmetrics include service costs, technical support, platform integration and user interest. Altmetrics should not be used for author ranking or comparison, and altmetrics sources should be regularly reevaluated for relevance. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Alternative metrics <s> Today, it is not clear how the impact of research on other areas of society than science should be measured. While peer review and bibliometrics have become standard methods for measuring the impact of research in science, there is not yet an accepted framework within which to measure societal impact. Alternative metrics (called altmetrics to distinguish them from bibliometrics) are considered an interesting option for assessing the societal impact of research, as they offer new ways to measure (public) engagement with research output. Altmetrics is a term to describe web-based metrics for the impact of publications and other scholarly material by using data from social media platforms (e.g. Twitter or Mendeley). This overview of studies explores the potential of altmetrics for measuring societal impact. It deals with the definition and classification of altmetrics. Furthermore, their benefits and disadvantages for measuring impact are discussed. <s> BIB002
|
Although citations still form the core of scientometrics, the dramatic rise of social media has opened up many more channels for recording the impact of academic research BIB002 BIB001. These go under the name of "altmetrics", both as a field and as the particular alternative metrics themselves.
|
A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Thank you very much for downloading citation indexing its theory and application in science technology and humanities. As you may know, people have search hundreds times for their favorite novels like this citation indexing its theory and application in science technology and humanities, but end up in infectious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they juggled with some infectious virus inside their computer. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> The claim that co-citation analysis is a useful tool to map subject-matter specialties of scientific research in a given period, is examined. A method has been developed using quantitative analysis of content-words related to publications in order to: (1) study coherence of research topics within sets of publications citing clusters, i.e., (part of) the “current work” of a specialty; (2) to study differences in research topics between sets of publications citing different clusters; and (3) to evaluate recall of “current work” publications concerning the specialties identified by co-citation analysis. Empirical support is found for the claim that co-citation analysis identifies indeed subject-matter specialties. However, different clusters may identify the same specialty, and results are far from complete concerning the identified “current work.” These results are in accordance with the opinion of some experts in the fields. Low recall of co-citation analysis concerning the “current work” of specialties is shown to be related to the way in which researchers build their work on earlier publications: the “missed” publications equally build on very recent earlier work, but are less “consensual” and/or less “attentive” in their referencing practice. Evaluation of national research performance using co-citation analysis appears to be biased by this “incompleteness.” <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Combined analysis of co‐citation relations and words is explored to study time‐dependent (“dynamical”) aspects of scientific activities, as expressed in research publications. This approach, using words originating from publications citing documents in co‐citation clusters, offers an additional and complementary possibility to identify and link specialty literature through time, compared to the exclusive use of citations. Analysis of co‐citation relations is used to locate and link groups of publications that share a consensus concerning intellectual base literature. Analysis of word‐profile similarity is used to identify and link publication groups that belong to the same subject‐matter research specialty. Different types of “content‐words” are analyzed, including indexing terms, classification codes, and words from title and abstract of publications. The developed methods and techniques are illustrated using data of a specialty in atomic and molecular physics. For this specialty, it is shown that, over a period of 10 years, continuity in intellectual base was at a lower level than continuity in topics of current research. This finding indicates that a series of interesting new contributions are made in course of time, without vast alteration in general topics of research. 
However, within this framework, a more detailed analysis based on timeplots of individual cited key‐articles and of content‐words reveals a change from more rapid succession of new empirical studies to more retrospective, and theoretically oriented studies in later years. © 1991 John Wiley & Sons, Inc. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Observing is a paradoxical operation: a duality as unity, and a distinction between distinguishing and indicating, that is, a distinction that is repeated in itself. One can speak of scientific observation only if such an operation of distinguishing- indication is achieved through concepts. If one observes observation one cannot avoid observing the paradox. When a second-order observer wants to know how the observed observer observes, it has to observe how the observed observer deals with its own paradox, how it de-paradoxizes the paradox. Even scientific communication is an actualization of the paradox of observation, and therefore it is in principle incapable of dealing with logic. A theory of scientific observation should then be concerned with how science has nevertheless managed. The point comes to be: who observes with the aid of the concept of communication, and how does it observe? <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> 'No reality please. We're economists'. There is a wide spread belief that modern economics is irrelevant to the understanding of the real world. In a controversial and original study, Tony Lawson argues that the root of this irrelevance is in the failure of economists to find methods and tools which are appropriate for the social world it addresses. Supporting his argument with a wide range of examples, Tony Lawson offers a provocative account of why economics has gone wrong and how it can be put back on track. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Citations support the communication of specialist knowledge by allowing authors and readers to make specific selections in several contexts at the same time. In the interactions between the social network of (first-order) authors and the network of their reflexive (that is, second-order) communications, a sub-textual code of communication with a distributed character has emerged. The recursive operation of this dual-layered network induces the perception of a cognitive dimension in scientific communication.Citation analysis reflects on citation practices. Reference lists are aggregated in scientometric analysis using one (or sometimes two) of the available contexts to reduce the complexity: geometrical representations (‘mappings’) of dynamic operations are reflected in corresponding theories of citation. For example, a sociological interpretation of citations can be distinguished from an information-theoretical one. The specific contexts represented in the modern citation can be deconstructed from the perspective of the cultural evolution of scientific communication. <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> The theory of autopoiesis, that is systems that are self-producing or self-constructing, was originally developed to explain the particular nature of living as opposed to non-living entities. It was subsequently enlarged to encompass cognition and language leading to what is known as second-order cybernetics. 
However, as with earlier biological theories, many authors have tried to extend the domain of the theory to encompass social systems, the most notable being Luhmann. The purpose of this article is to consider critically the extent to which the theory of autopoiesis, as originally defined, can be applied to social systems - that is, whether social systems are autopoietic. And, if it cannot, whether some weaker version might be appropriate. <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Because of the widespread use of citations in evaluation, we tend to think of them primarily as a form of colleague recognition. This interpretation neglects rhetorical factors that shape patterns of citations. After reviewing sociological theories of citation, this paper argues that we should think of citations first as rhetoric and second as reward. Some implications of this view for quantitative modeling of the citation process are drawn. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Since theScience Citation Index emerged within the system of scientific communication in 1964, an intense controversy about its character has been raging: in what sense can citation analysis be trusted? This debate can be characterized as the confrontation of different perspectives on science. In this paper the citation representation of science is discussed: the way the citation creates a new reality of as well as in the world of science; the main features of this reality; and some implications for science and science policy. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> The shape of the discipline <s> Abstract The paper discusses the often lamented lack of a theory of citations, and the lack of a sociological theory in particular. It draws attention to one proposed theory and discusses the potential reasons why it has not been generally accepted as the theory of citations, despite its merits in explaining many phenomena in the citation behaviour of scientists. This theory has been expounded by Latour and presented, in particular, in his book entitledScience in Action. <s> BIB010
|
Citations refer to texts other than the one that contains the cited references, and thus induce a dynamic vision of the sciences developing as networks of relations . In the scientometric literature, this has led to the call for "a theory of citation" BIB008 BIB001 BIB006. The citation index BIB009 inverts the directionality and studies "citedness" as a measure of impact. From the perspective of STS, the citation index would thus generate a semiotic artifact BIB010. References can have different functions in texts, such as legitimating research agendas, warranting knowledge claims, black-boxing discussions, or being merely perfunctory. In and among texts, references can also be compared with the co-occurrences and co-absences of words in a network model of science BIB002 BIB003. A network theory of science was formulated by Hesse (1980, p. 83) as "an account that was first explicit in Duhem and more recently reinforced in Quine. Neither in Duhem nor in Quine, however, is it quite clear that the netlike interrelations between more observable predicates and their laws are in principle just as subject to modifications from the rest of the network as are those that are relatively theoretical." A network can be visualized, but can also be formalized as a matrix. The eigenvectors of the matrix span the latent dimensions of the network (a small numerical illustration is given at the end of this section). There is thus a bifurcation within the discipline of scientometrics. On the one hand, and by far the dominant partner, we have the relatively positivistic, quantitative analysis of citations as they have happened - after the fact, so to speak. And on the other, we have the sociological, and often constructivist, theorising about the generation of citations - a theory of citing behaviour. Clearly the two sides are, and need to be, linked. The citing behaviour, as a set of generative mechanisms, produces the citation events; but, at the same time, analyses of the patterns of citations as "demi-regularities" BIB005 can provide insights into the processes of scientific communication which can stimulate or validate theories of behaviour. Another interesting approach is to consider the overall process as a social communication system. One could use Luhmann's theory of society BIB004 as being based on autopoietic communication BIB007. Different functional subsystems within society, e.g., science, generate their own organizationally closed networks of recursive communications. A communicative event consists of a unity of information, utterance and understanding between senders and receivers. Within the scientometrics context, the paper, its content and its publication would be the information and utterance, and the future references to it in other papers would be the understanding that it generates. Such communication systems operate at their own emergent level, distinct from the individual scientists who underlie them, and generate their own cognitive distinctions that can be revealed by the visualisation procedures discussed above.
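As that small numerical illustration, the following sketch builds an invented co-citation matrix and extracts its leading eigenvectors; the numbers are purely illustrative.

```python
import numpy as np

# Toy symmetric co-citation matrix among five documents (invented values).
A = np.array([
    [0, 4, 3, 0, 0],
    [4, 0, 5, 1, 0],
    [3, 5, 0, 0, 1],
    [0, 1, 0, 0, 6],
    [0, 0, 1, 6, 0],
], dtype=float)

# Eigendecomposition: the leading eigenvectors span the latent dimensions of the
# network; here they broadly separate the {0, 1, 2} and {3, 4} groups of documents.
eigenvalues, eigenvectors = np.linalg.eigh(A)
order = np.argsort(eigenvalues)[::-1]           # sort by descending eigenvalue
print(np.round(eigenvectors[:, order[:2]], 2))  # two leading latent dimensions
```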
|
Attempting to treat these deficiencies, substantial research efforts have resulted in a wide range of advancements including design of new planning and control methodologies and development of sophisticated computerised applications. However, these efforts have not effectively overcome all of the above CPM drawbacks and, therefore, have not yet provided a solution to the industry. <s> BIB014 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> SUMMARY Reliable construction schedule is vital for effective co-ordination across supply chains and various trades at construction work face. According to the lean construction concept, reliability of the schedule can be enhanced through detection and satisfaction of all potential constraints prior to releasing operation assignments. However, it is difficult to implement this concept since current scheduling tools and techniques are fragmented and designed to deal with a limited set of construction constraints. This paper introduces a methodology termed ‘multi-constraint scheduling’ in which four major groups of construction constraints including physical, contract, resource, and information constraints are considered. A Genetic Algorithm (GA) has been developed and used for multi-constraint optimisation problem. Given multiple constraints such as activity dependency, limited working area, and resource and information readiness, the GA alters tasks’ priorities and construction methods so as to arrive at optimum or near optimum set of project duration, cost, and smooth resource profiles. This feature has been practically developed as an embedded macro in MS Project. Several experiments confirmed that GA can provide near optimum solutions within acceptable searching time (i.e. 5 minutes for 1.92E11 alternatives). Possible improvements to this research are further suggested in the paper. <s> BIB015 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> Many departments of transportation have recently started to utilize innovative contracting methods that provide new incentives for improving construction quality. These emerging contracts place an increasing pressure on decision makers in the construction industry to search for an optimal resource utilization plan that minimizes construction cost and time while maximizing its quality. This paper presents a multiobjective optimization model that supports decision makers in performing this challenging task. The model is designed to transform the traditional two-dimensional time-cost tradeoff analysis to an advanced three-dimensional time-cost-quality trade-off analysis. The model is developed as a multiobjective genetic algorithm to provide the capability of quantifying and considering quality in construction optimization. An application example is analyzed to illustrate the use of the model and demonstrate its capabilities in generating and visualizing optimal tradeoffs among construction time, cost, and quality. <s> BIB016 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> This paper attempts to use evolutionary algorithms to solve the problem of minimizing construction project duration in deterministic conditions, with in-time changeable and limited accessibility of renewable resources (workforce, machines, and equipment). 
Particular construction processes (with various levels of complexity) must be conducted in the established technological order and can be executed with different technological and organizational variants (different contractors, technologies, and ways of using resources). Such a description of realization conditions allows the method to also be applied to solving more complex problems that occur in construction practice (e.g., scheduling resources for a whole company, not only for a single project). The method's versatility distinguishes it from other approaches presented in numerous publications. To assess the solutions generated by the evolutionary algorithm, the writers worked a heuristic algorithm (for the allocation of resources and the calculation of the shortest project duration). The results obtained by means of this methodology seem to be similar to outcomes of other comparable methodologies. The proposed methodology (the model and the computer system) may be of great significance to the construction industry. The paper contains some examples of the practical use of the evolutionary algorithm for project planning with time constraints. <s> BIB017 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> This paper introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO). The MRCPSP considers both renewable and nonrenewable resources that have not been addressed efficiently in the construction field. The framework of the PSO-based methodology is developed with the objective of minimizing project duration. A particle representation formulation is proposed to represent the potential solution to the MRCPSP in terms of priority combination and mode combination for activities. Each particle-represented solution should be checked against the nonrenewable resource infeasibility and will be handled by adjusting the mode combination. The feasible particle-represented solution is transformed to a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. Comparisons with other methods show that the PSO method is equally efficient at solving the MRCPSP. <s> BIB018 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> An alternative heuristic method for scheduling repetitive projects in which resources are limited and activities may be executed with multiple modes of resource demands associated with different durations is proposed. Unlike general heuristic methods that separately analyze each competing activity and schedule only one at a time, the proposed heuristic algorithm ranks possible combinations of activities every time and simultaneously schedules all activities in the selected combination leading to minimal project duration. All alternative combinations of activities in consideration of resource constraints, multiple modes and characteristics of the repetitive projects are determined through a permutation tree-based procedure. The heuristic method is implemented based on the corresponding framework. An example is presented to demonstrate the efficiency of the proposed heuristic method. The study is expected to provide an efficient heuristic methodology for solving the project scheduling problem. 
<s> BIB019 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> Evolutionary algorithms, a form of meta-heuristic, have been successfully applied to a number of classes of complex combinatorial problems such as the well-studied travelling salesman problem, bin packing problems, etc. They have provided a method other than an exact solution that will, within a reasonable execution time, provide either optimal or near optimal results. In many cases near optimal results are acceptable and the additional resources that may be required to provide exact optimal results prove uneconomical. The class of project scheduling problems (PSP) exhibit a similar type of complexity to the previous mentioned problems, also being NP-hard, and therefore would benefit from solution via meta-heuristic rather than exhaustive search. Improvement to a project schedule in terms of total duration or resource utilisation can be of major financial advantage and therefore near optimal solution via evolutionary techniques should be considered highly applicable. In preparation for further research th... <s> BIB020 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> Time and cost are the most important factors to be considered in every construction project. In order to maximize the return, both the client and contractor would strive to optimize the project duration and cost concurrently. Over the years, many research studies have been conducted to model the time–cost relationships, and the modeling techniques range from the heuristic methods and mathematical approaches to genetic algorithms. Despite that, previous studies often assumed the time being constant leaving the analyses based purely on a single objective—cost. Acknowledging the significance of time–cost optimization, an evolutionary-based optimization algorithm known as ant colony optimization is applied to solve the multiobjective time–cost optimization problems. In this paper, the basic mechanism of the proposed model is unveiled. Having developed a program in the Visual Basic platform, tests are conducted to compare the performance of the proposed model against other analytical methods previously used fo... <s> BIB021 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> Abstract This paper deals with construction project scheduling. In the literature on the subject one can find such scheduling methods as: the Linear Scheduling Model (LSM), Line of Balance (LOB) charts and CMP/PERT network planning. The methods take into account several objective functions: the least cost, the least time, limited resources, work priorities, etc., both in the deterministic and probabilistic approach. The paper presents an analysis of the time/cost relationship, performed using time coupling method TCM III. A modified hybrid evolutionary algorithm (HEA) developed by Bozejko and Wodecki (A Hybrid Evolutionary Algorithm for Some Discrete Optimization Problems. IEEE Computer Society, 325–331, 2005) was used for optimization. <s> BIB022 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> The resource-constrained project scheduling problem (RCPSP) has received the attention of many researchers because its general model can be used in a wide variety of construction planning and scheduling applications. 
The exact procedures and priority-rule-based heuristics fail to search for the optimum solution to the RCPSP of large-sized project networks in a reasonable amount of time for successful application in practice. This paper presents a permutation-based elitist genetic algorithm for solving the problem in order to fulfill the lack of an efficient optimal solution algorithm for project networks with 60 activities or more as well as to overcome the drawback of the exact solution approaches for large-sized project networks. The proposed algorithm employs the elitist strategy to preserve the best individual solution for the next generation so the improved solution can be obtained. A random number generator that provides and examines precedence feasible individuals is developed. A serial schedule generation scheme for the permutation-based decoding is applied to generate a feasible solution to the problem. Computational experiments using a set of standard test problems are presented to demonstrate the performance and accuracy of the proposed algorithm. <s> BIB023 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> The project scheduling problem involves the scheduling of project activities subject to precedence and/or resource constraints. Of obvious practical importance, it has been the subject of intensive research since the late fifties. A wide variety of commercialized project management software packages have been put to practical use. Despite all these efforts, numerous reports reveal that many projects escalate in time and budget and that many project scheduling procedures have not yet found their way to practical use. The objective of this paper is to confront project scheduling theory with project scheduling practice. We provide a generic hierarchical project planning and control framework that serves to position the various project planning procedures and discuss important research opportunities, the exploration of which may help to close the theory-practice gap. <s> BIB024 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> The paper presents a computational method to help in automating the generation of time schedules for bridge construction projects. The method is based on the simulation of the construction works, taking into account the available resources and the interdependencies between the individual tasks. The simulation is realized by means of the discrete-event based simulation software originally created for plant layout in the manufacturing industry. Since the fixed process chains provided there are too rigid to model the more spontaneous task sequences of construction projects, a constraint module that selects the next task dynamically has been incorporated. The input data of the constraint module is formed by work packages of atomic activities. The description of a work package comprises the building element affected, the required material, machine and manpower resources, as well as the technological pre-requisites of the task to be performed. These input data are created with the help of a 3D model-based application that enables to assign process patterns to individual building elements. A process pattern consists of a sequence of work packages for realizing standard bridge parts, thus describing a construction method which in turn represents a higher level of abstraction in the scheduling process. In the last step, the user specifies the available resources. 
The system uses all the given information to automatically create a proposal for the construction schedule, which may then be refined using standard scheduling software. <s> BIB025 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> In construction scheduling, problems can arise when each activity could start at different time points and the resources needed by the activities are limited. Moreover, activities have required conditions to be met, such as precedence relationships, resource requirements, etc. To resolve these problems, a two-phase GA (genetic algorithm) model is proposed in this paper, in which both the effects of time-cost trade-off and resource scheduling are taken into account. A GA-based time-cost trade-off analysis is adopted to select the execution mode of each activity through the balance of time and cost, followed by utilization of a GA-based resource scheduling method to generate a feasible schedule which may satisfy all the project constraints. Finally, the model is demonstrated using an example project and a real project. <s> BIB026 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> The resource-constrained project scheduling problem (RCPSP) consists of activities that must be scheduled subject to precedence and resource constraints such that the makespan is minimized. It has become a well-known standard problem in the context of project scheduling which has attracted numerous researchers who developed both exact and heuristic scheduling procedures. However, it is a rather basic model with assumptions that are too restrictive for many practical applications. Consequently, various extensions of the basic RCPSP have been developed. This paper gives an overview over these extensions. The extensions are classified according to the structure of the RCPSP. We summarize generalizations of the activity concept, of the precedence relations and of the resource constraints. Alternative objectives and approaches for scheduling multiple projects are discussed as well. In addition to popular variants and extensions such as multiple modes, minimal and maximal time lags, and net present value-based objectives, the paper also provides a survey of many less known concepts. <s> BIB027 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> In this paper, a computational approach based on a new exact penalty ::: function method is devised for solving a class of continuous ::: inequality constrained optimization problems. The continuous ::: inequality constraints are first approximated by smooth function in ::: integral form. Then, we construct a new exact penalty function, ::: where the summation of all these approximate smooth functions in ::: integral form, called the constraint violation, is appended to the ::: objective function. In this way, we obtain a sequence of approximate ::: unconstrained optimization problems. It is shown that if the value ::: of the penalty parameter is sufficiently large, then any local ::: minimizer of the corresponding unconstrained optimization problem is ::: a local minimizer of the original problem. For illustration, three ::: examples are solved using the proposed method. From the solutions ::: obtained, we observe that the values of their objective functions ::: are amongst the smallest when compared with those obtained by other ::: existing methods available in the literature. 
More importantly, our ::: method finds solution which satisfies the continuous inequality ::: constraints. <s> BIB028 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> In this paper, the intelligent optimization methods including genetic algorithm (GA), particle swarm optimization (PSO) and modified particle swarm optimization (MPSO) are used in optimizing the project scheduling of the first mining face of the second region of the fifth Ping'an coal mine in China. The result of optimization provides essential information of management and decision-making for governors and builder. The process of optimization contains two parts: the first part is obtaining the time parameters of each process and the network graph of the first mining face in the second region by PERT (program evaluation and review technique) method based on the raw data. The other part is the second optimization to maximal NPV (net present value) based on the network graph. The starting dates of all processes are decision-making variables. The process order and time are the constraints. The optimization result shows that MPSO is better than GA and PSO and the optimized NPV is 14,974,000 RMB more than the original plan. <s> BIB029 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> This study investigates cash flow for profit optimization and handles scheduling problems in multiproject environment. By identifying the amount and timing of individual inflow or outflow at the end of each period, contractors can observe the cash flow at specific time points according to project progress. Since most companies handle multiple projects simultaneously, managing project finance becomes complicated and tough for contractors. Therefore, this study considers cash flow and the financial requirements of contractors working in a multiple-project environment and proposes a profit optimization model for multiproject scheduling problems using constraint programming. The current study also presents a hypothetical example involving three projects to illustrate capability of the proposed model and adopts various constraints, including credit limit (CL) and due dates, for scenario analysis. The analysis result demonstrates that setting CLs ensures smooth financial pressure by properly shifting activities, and assigning due dates for projects helps planners avoid project duration extension while maximizing overall project profit. <s> BIB030 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Introduction <s> Conventional project scheduling is restricted to single-skilled resource assumption where each worker is assumed to have only one skill. This, in effect, contradicts real-world practice where workers may possess multiple skills and, on several occasions, are assigned to perform tasks for which they are not specialized. Past research has shown a simple process of heuristic approach for multi-skilled resource scheduling where a project is planned under the assumption that each resource can have more than one skill and resource substitution is allowed. Nevertheless, the approach has presented resource substitution step where an activity with higher priority can claim any resource regardless of its concurrent activities' resource requirements. 
Furthermore, the approach is subjected to all-or-nothing resource assignment concept where an activity cannot start and resources are not needed for that activity at all unless the required resources of that activity can be completely fulfilled. This research presents an alternative heuristic approach for multi-skilled resource scheduling in an attempt to improve the resource substitution approach. Augmented resource substitution rule and resource-driven task duration are presented to increase starting opportunity of activities on earlier time. Case studies are presented to illustrate the improved result of shorter project duration. <s> BIB031
|
Scheduling the execution processes for a construction project is a complex and challenging task. The selection of resources (e.g., labour, plant and equipment) is the most important part of scheduling and should be considered in congruence with site restrictions and the work to be undertaken BIB017 . As projects are unique in nature, the creation of a schedule for construction tasks by a planner, for example, should consider an array of conditions such as technological and organizational methods and constraints, as well as the availability of resources, to ensure that a client's needs and requirements in terms of time, cost and quality are met BIB017 . Construction project scheduling has received a considerable amount of attention over the last 20 years (e.g., BIB004 BIB008 BIB011 BIB018 ). A plethora of methods and algorithms have been developed to address specific scenarios or problems, particularly significant practical issues such as: scheduling with uncertain estimates of activity durations; integrated planning, scheduling and resource allocation; and scheduling in unstructured or poorly formulated circumstances. Fundamentally, the construction schedule optimization (CSO) problem is a subdivision of the project scheduling optimization problem, and many techniques and algorithms used for solving the project scheduling problem can be directly applied to CSO problems. A detailed review of project scheduling can be found in BIB002 , BIB005 , BIB007 , BIB024 , BIB020 and BIB027 . Moreover, just as different projects have varying features, so do CSO problems. Construction projects are unique in nature and each has its own site characteristics, weather conditions, crew of labour and fleet of equipment. As a result, it is difficult to accurately predict the exact duration of each activity. The CSO problem involves the scheduling of construction activities subject to precedence and/or resource constraints. The aim of the CSO is to determine a feasible schedule of these activities that achieves a predefined objective, for example, the shortest project duration, the lowest cost or the highest profit, subject to the problem constraints. Contractors always strive to minimize the project duration so as to obtain an advantage during a bid's evaluation. For example, they may 'crash' a project's duration (i.e., reduce activities to the shortest possible time for which they can be scheduled) by allocating more resources (if sufficient resources are available) to expedite construction activities. However, crashing a project's duration invariably increases the cost, as additional resources are required. This is due to the interdependency that exists between time and cost. For example, compressing a project's duration will lead to an increase in direct costs (plant and equipment, materials and labour costs) and a decrease in indirect costs (project overhead), and vice versa BIB006 BIB009 BIB010 . To be successful in a bid evaluation, cost is also a factor that should be considered by a planner. During the construction phase, the costs of materials, plant and equipment, and labour are classified as direct costs, while insurance and taxes, for example, are indirect costs. Typically, clients, particularly developers, aim to minimize project cost and duration in order to reduce their cost of finance and maximize their return on capital. For contractors, minimizing cost increases their profit and an earlier project completion reduces the risk of inflation and labour shortage BIB021 .
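To make the interplay between direct and indirect costs concrete, the following minimal sketch enumerates the feasible crash levels of a two-activity project and picks the cheapest total. The activity names, durations, cost figures and overhead rate are invented for illustration only; they are not taken from any of the cited studies.

```python
# Illustrative time-cost trade-off for two sequential activities (hypothetical data).
# Crashing an activity shortens its duration but raises its direct cost;
# a shorter project duration lowers the time-dependent indirect (overhead) cost.
from itertools import product

# (normal_duration, crash_duration, normal_cost, crash_cost) per activity -- assumed figures
activities = {
    "excavation": (5, 3, 1000, 1600),   # crashing costs 300 per day saved
    "foundation": (4, 2, 1200, 2100),   # crashing costs 450 per day saved
}
OVERHEAD_PER_DAY = 400  # assumed indirect cost rate

def direct_cost(act, duration):
    normal_d, crash_d, normal_c, crash_c = activities[act]
    slope = (crash_c - normal_c) / (normal_d - crash_d)  # extra cost per day saved
    return normal_c + slope * (normal_d - duration)

best = None
for durations in product(*[range(c, n + 1) for n, c, _, _ in activities.values()]):
    project_duration = sum(durations)  # the two activities run in series
    total = sum(direct_cost(a, d) for a, d in zip(activities, durations)) \
            + OVERHEAD_PER_DAY * project_duration
    if best is None or total < best[1]:
        best = (durations, total, project_duration)

print("durations:", dict(zip(activities, best[0])),
      "| project duration:", best[2], "days | total cost:", best[1])
```

In this toy setting only the activity whose crash cost per day is below the daily overhead rate is worth compressing, which is exactly the time-cost interdependency described above.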
Project scheduling should therefore consider time and cost simultaneously, as a 'trade-off' exists between them. Thus, the original single-objective optimization problem (i.e., optimal time or cost) is shifted to a bi-objective optimization problem (i.e., optimal time-cost). The construction time-cost optimization problem has been examined extensively in the construction engineering and management literature (e.g., BIB001 BIB003 BIB012 BIB022 BIB025 BIB028 BIB031 ). 'Crash duration' is a commonly used method to expedite the construction process. If a client, for example, requires their project to be completed earlier, a contractor may provide additional resources to shorten the duration of designated activities. As previously noted, for this to occur, resources need to be readily available. In practice, this assumption is often deemed to be unrealistic, as construction projects are subject to constraints that play a key role in determining their schedule, for example, activity dependency, limited working area and information availability BIB015 . Activity dependency, time, cost and resources are the constraints normally considered when scheduling under the auspices of traditional project management. Solutions to this optimal scheduling problem with or without consideration of these constraints vary BIB013 . Activity dependency, or the precedence relationship, is the most basic constraint that exists in construction projects. In a construction process, an activity cannot start until all its preceding activities are completed. In addition, the start time of each activity cannot be later than its latest start time if the project is to finish within the demanded duration BIB029 . Working space is always limited on a construction project. A working area may be required by several different activities at the same time; therefore, determining how to optimally manage a working area to facilitate activity scheduling will directly affect project performance. Such a situation is called a 'space-time' conflict problem between construction site activities. Resources are the most influential constraints in construction, as they determine the feasibility of a project schedule and whether it is optimal BIB026 . Schedule reduction is heavily dependent on the availability of resources. Information constraints, which include drawings, specifications, safety and risk assessments and authorizations to work, also have a significant impact on the construction scheduling problem. Information flow between activities, for example, has often been overlooked BIB014 . A detailed review of constraints influencing construction can be found in BIB014 , BIB019 , BIB023 , BIB025 and BIB030 . Another 'trade-off' arising from crashing a project's duration is its influence on project quality. Thus, the time-cost bi-objective optimization problem can be expanded into a time-cost-quality multi-objective optimization problem, that is, minimizing the construction time and cost while maximizing the quality. Involving quality as an additional objective requires that quality be quantifiable. In doing so, two major challenges arise BIB016 : (1) the difficulty of measuring and quantifying the impact of each resource utilization option on the quality of the activity being considered; and (2) the complexity of aggregating quality performance at the activity level into an overall quality measure at the project level.
In addressing the above challenges, El-Rayes and Kandil (2005) proposed a quality objective function that consists of a number of measurable quality indicators for each activity. It also comprises two types of weights that are used to estimate the overall quality performance at the project level, namely, the weight of each quality indicator relative to the other indicators within an activity, and the weight of each activity relative to the other activities in the project BIB016 . Therefore, the traditional two-dimensional time-cost trade-off problem is transformed into a three-dimensional time-cost-quality trade-off problem. The proposed method provides useful information for decision makers when making trade-off decisions, especially in environments where high quality is demanded. Research including quality as an additional objective has, however, been limited to date.
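The two-level weighting described above can be illustrated with a minimal sketch. The indicator names, weights and scores below are invented for illustration and are not taken from El-Rayes and Kandil's data; the sketch only shows how indicator weights within an activity and activity weights within the project combine into a project-level quality score.

```python
# Two-level weighted quality aggregation (hypothetical weights and scores).
# Level 1: weight of each quality indicator within its activity.
# Level 2: weight of each activity within the project.
project = {
    "formwork": {
        "activity_weight": 0.6,
        "indicators": {          # indicator: (weight within activity, measured score 0-100)
            "dimensional_accuracy": (0.7, 92),
            "surface_finish":      (0.3, 80),
        },
    },
    "concreting": {
        "activity_weight": 0.4,
        "indicators": {
            "compressive_strength": (0.8, 95),
            "curing_compliance":    (0.2, 70),
        },
    },
}

def activity_quality(indicators):
    """Weighted average of indicator scores within one activity."""
    return sum(weight * score for weight, score in indicators.values())

project_quality = sum(
    data["activity_weight"] * activity_quality(data["indicators"])
    for data in project.values()
)

for name, data in project.items():
    print(f"{name}: quality = {activity_quality(data['indicators']):.1f}")
print(f"project-level quality = {project_quality:.1f}")
```

A project-level quality score of this kind can then serve as the third objective alongside time and cost in a multi-objective formulation.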
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> This paper is concerned with establishing the mathematical basis of the Critical-Path Method---a new tool for planning, scheduling, and coordinating complex engineering-type projects. The essential ingredient of the technique is a mathematical model that incorporates sequence information, durations, and costs for each component of the project. It is a special parametric linear program that, via the primal-dual algorithm, may be solved efficiently by network flow methods. Analysis of the solutions of the model enables operating personnel to answer questions concerning labor needs, budget requirements, procurement and design limitations, the effects of delays, and communication difficulties. <s> BIB001 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> This paper describes in part findings and conclusions of ASCE’s Task Committee on Management of Construction Projects. This paper presents definitions of “Professional Construction Management” and “Professional Construction Manager”, explains the reasoning behind them, then describes the responsibilities of the Professional Construction Manager and his requirements in the planning and execution phases of a project. Professional Construction Management differs from conventional design-construct and traditional separate contractor and designer approaches in that there are by definition three separate and distinct members of the team (owner, designer, and manager) and the Professional Construction Manager does not perform significant design or construction work with his own forces. Professional Construction Management is not necessarily better or worse than other methods of procuring constructed facilities. However, the three-party-team approach is certainly a viable alternative to more traditional methods in many applications as its increasing use will demonstrate. <s> BIB002 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> One of the most important functions of planning is to offset uncertainty and change. However, projects are often affected by external factors or constraints that can either facilitate progress or create delays in the project. Sometimes, logic changes can be inevitable. Therefore, special techniques are needed to provide a simple way of network updating in order to reflect the impact of logic change on project completion date and on the critical path. This paper addresses the problem of soft logic and discusses logic changes during the course of the work. An algorithmic procedure has been developed to handle the soft logic in network analysis. SOFTCPM is a microcomputer program created by the writers that deals with the soft logic in CPM networks. It has the capability of updating the CPM network logic when any unexpected event occurs that prevents working according to the scheduled activity sequence. <s> BIB003 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> Critical Path Method Procedures and Terminology. The Network Diagram and Utility Data. Network Calculations I: Critical Paths and Floats. Network Calculations II: Simple Compression. Network Calculations III: Complex Compression and Decompression. Network Calculations IV: Scheduling and Resource Leveling. Practical Planning with Critical Path Methods. Project Control with Critical Path Methods. 
Financial Planning and Cost Control. Evaluation of Work Changes and Delays. Attitudes, Responsibilities, and Duties. Computer-Aided CPM. Selection of Technique. Integrated Project Development and Management. CPM, a Systems Concept. Appendices. Index. <s> BIB004 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> The capabilities of network diagramming techniques (NDT) are restricted by limitations inherent in their representation of schedule constraints (typically expressed through precedent relationships between activities). This assertion suggests the selection of a richer representation as a departure point for extending the utility of planning and scheduling techniques. The authors suggest that such a representation is provided by a system which employs a general model of a project's 'status' and which allows schedule constraints to be expressed as rules which refer to this status. This system offers the advantages of allowing the precedence of activities to be based on more than just the completion of other activities. It also provides an efficient knowledge-based approach to scheduling that can express the reasoning underlying scheduling actions which can be employed by future artificial intelligence (AI) planning. The authors discuss how constraints are represented and describes what types of constraints can be represented in both NDT and in the proposed system-called A Construction Planner (ACP). This is followed by a comparison of the schedule generation algorithms used in NDT and in ACP. <s> BIB005 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Mathematical methods <s> In this paper, a practical method is developed in an attempt to address the fundamental matters and limitations of existing methods for critical-path method (CPM) based resource scheduling, which are identified by reviewing the prior research in resource-constrained CPM scheduling and repetitive scheduling. The proposed method is called the resource-activity critical-path method (RACPM), in which (1) the dimension of resource in addition to activity and time is highlighted in project scheduling to seamlessly synchronize activity planning and resource planning; (2) the start/finish times and the floats are defined as resource-activity attributes based on the resource-technology combined precedence relationships; and (3) the resource critical issue that has long baffled the construction industry is clarified. The RACPM is applied to an example problem taken from the literature for illustrating the algorithm and comparing it with the existing method. A sample application of the proposed RACPM for planning a footbridge construction project is also given to demonstrate that practitioners can readily interpret and utilize a RACPM schedule by relating the RACPM to the classic CPM. The RACPM provides schedulers with a convenient vehicle for seamlessly integrating the technology/process perspective with the resource use perspective in construction planning. The effect on the project duration and activity floats of varied resource availability can be studied through running RACPM on different scenarios of resources. This potentially leads to an integrated scheduling and cost estimating process that will produce realistic schedules, estimates, and control budgets for construction. <s> BIB006
|
Critical path method. The critical path method (CPM) is a widely used project scheduling algorithm that was developed in the late 1950s BIB001 . It can be applied to any project with interdependent activities, such as construction, aerospace engineering, software development and industrial manufacturing. To date, CPM is the most commonly used scheduling tool in the construction industry. Fundamentally, however, CPM can only deal with optimization problems with a single objective. CPM is commonly used in conjunction with the Programme Evaluation and Review Technique (PERT). The conventional CPM was developed to analyse the project network logic diagram. The essential technique in implementing CPM is to construct a model of the project that involves the following items: a list of all activities within the project; the duration of each activity; and the precedence relationships between the activities. With the above information, CPM can be used to calculate the longest path (critical path) through the project, and the earliest and latest starting and finishing times of each activity that do not delay the completion of the project. Activities on the critical path are termed 'critical activities' and those not on the critical path are 'float activities'. Figure 1 provides an example of a CPM network diagram with seven activities represented on nodes. In Figure 1 , there is only one critical path, that is, S-A-B-C-E. Therefore, A, B and C are critical activities, and any delay to these activities will delay the entire project. D, F, G and H are float activities, which can be delayed without influencing the project's duration. Therefore, CPM can be used in this instance to determine the shortest possible time to complete the project. A detailed review of how CPM has been used in the construction industry is available in the literature. A major limitation of CPM schedules is that they rely solely on time and dependency constraints. In addressing this limitation, a two-stage approach to representing resource constraints has been developed: in the first stage, the precedence relationships are defined, while in the second stage resources are introduced into the schedule using resource allocation or leveling algorithms BIB004 BIB003 . It has been suggested that resources should be ignored during the first stage. However, BIB005 have argued that realistic construction activities cannot be defined without considering resources. Moreover, it is often difficult to determine the logic between technological and resource constraints BIB002 . To overcome such difficulties, BIB005 proposed an approach, called 'A Construction Planner' (ACP), which explicitly accounts for all constraints simultaneously, including resource constraints, using a single-stage approach. The ACP provides a more robust model of planning that takes advantage of advanced computer technologies BIB005 . BIB006 developed a method to accommodate resource constraints and repetitive scheduling known as the Resource-Activity Critical-Path Method, in which, on the basis of the resource-technology combined precedence relationships, the start/finish times and the floats are defined as resource-activity attributes. However, minimization of the overall project cost was not considered in this approach. In addressing this issue, a scheduling method has been presented for determining the critical path in linear projects, which takes into account maximum time and distance constraints in addition to the commonly used minimum time and distance constraints.
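Returning to the basic CPM calculations described above, the forward and backward passes can be sketched in a few lines of code. The small activity-on-node network below is hypothetical (it is not the network of Figure 1); the sketch simply shows how the earliest/latest times, total float and critical activities follow from the durations and precedence relationships.

```python
# Minimal CPM forward/backward pass on a small hypothetical activity-on-node network.
activities = {          # name: (duration, list of predecessors)
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

# Topological order (simple repeated sweep; adequate for small acyclic networks).
order, placed = [], set()
while len(order) < len(activities):
    for name, (_, preds) in activities.items():
        if name not in placed and all(p in placed for p in preds):
            order.append(name)
            placed.add(name)

# Forward pass: earliest start (ES) / earliest finish (EF).
ES, EF = {}, {}
for name in order:
    dur, preds = activities[name]
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_duration = max(EF.values())

# Backward pass: latest finish (LF) / latest start (LS).
LS, LF = {}, {}
for name in reversed(order):
    dur, _ = activities[name]
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    LF[name] = min((LS[s] for s in successors), default=project_duration)
    LS[name] = LF[name] - dur

for name in order:
    total_float = LS[name] - ES[name]
    tag = "CRITICAL" if total_float == 0 else f"float={total_float}"
    print(f"{name}: ES={ES[name]} EF={EF[name]} LS={LS[name]} LF={LF[name]} ({tag})")
print("project duration:", project_duration)
```

Activities with zero total float form the critical path; any delay to them extends the project, while float activities can absorb some delay, exactly as described above.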
This linear scheduling method incorporates the maximum constraints into the schedule, and all linear activities are grouped into four categories according to their critical status and their ability to influence the project duration. In this method, production rates are assumed to be fixed, so the approach is unable to deal with uncertain resource availability.
Integer programming (IP), linear programming (LP) and IP/LP algorithms. A number of analytical algorithms have been applied to address CSO problems, such as IP, LP and hybrid IP/LP algorithms. LP is a mathematical method for solving optimization problems with a linear objective function subject to linear equality and inequality constraints. An LP problem can be expressed in the following general form: maximize (or minimize) z = cᵀx subject to Ax ≤ b and x ≥ 0, where x is the vector of decision variables, c the vector of objective-function coefficients, A the matrix of constraint coefficients and b the vector of constraint bounds.
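As an illustration of this general form applied to scheduling, the sketch below formulates a tiny time-cost crashing problem as an LP and solves it with scipy.optimize.linprog (assuming SciPy is available). The activities, durations, crash limits, cost slopes and deadline are invented for the example and are not drawn from any cited study.

```python
# Crashing two sequential activities as a linear programme (hypothetical data).
# Decision variables: x = [dA, dB], the chosen durations of activities A and B.
# Crash cost slopes: 100/day for A, 150/day for B.  Deadline: 7 days.
# Minimising the crashing cost is equivalent to maximising 100*dA + 150*dB,
# i.e. minimising c^T x with c = [-100, -150].
from scipy.optimize import linprog

c = [-100, -150]                 # objective coefficients (minimised)
A_ub = [[1, 1]]                  # dA + dB <= 7 (deadline constraint)
b_ub = [7]
bounds = [(3, 5), (2, 4)]        # crash duration <= d <= normal duration

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
dA, dB = res.x
crash_cost = 100 * (5 - dA) + 150 * (4 - dB)
print(f"durations: A={dA:.1f} d, B={dB:.1f} d, crashing cost = {crash_cost:.0f}")
```

When some decision variables must take integer values (e.g., discrete crew or equipment options), the same structure becomes an IP or mixed IP/LP model and requires branch-and-bound or related techniques, which is why hybrid IP/LP algorithms appear in the CSO literature.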
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Existing dynamic programming formulations are capable of identifying, from a set of possible alternatives, the optimum crew size for each activity in a repetitive project. The optimization criterion of these formulations is, however, limited to the minimization of the overall duration of the project. While this may lead to the minimization of the indirect cost of the project, it does not guarantee its overall minimum cost. The objective of this paper is to present a model that incorporates cost as an important decision variable in the optimization process. The model utilizes dynamic programming and performs the solution in two stages: first a forward process to identify local minimum conditions, and then a backward process to ensure an overall minimum state. In the first stage, a process similar to that use in time-cost trade-off analysis is employed, and a simple scanning and selecting process is used in the second stage. An example project from the literature is analyzed in order to demonstrate the use of the model and its validity, and illustrate the significance of cost as a decision variable in the optimization process. <s> BIB001 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Construction planners must select appropriate resources, including crew size, equipment, methods, and technologies, to perform the tasks of a construction project. In general, there is a trade-off between time and cost to complete a task—the less expensive the resources, the longer it takes. Using critical-path-method techniques, the overall project cost can be reduced by using less expensive resources for noncritical activities without impacting the duration. Furthermore planners usually need to adjust the selection of resources in order to shorten or lengthen the project duration. Finding optimal decisions is difficult and time-consuming considering the numbers of permutations involved. For example, a critical-path-method network with only eight activities, each with two options, will have 256 (2\u8) alternatives. Exhaustive enumeration is not economically feasible even with very fast computers. This paper presents a new algorithm using linear and integer programming to efficiently obtain optimal resource selections that optimize time and cost of a construction project. <s> BIB002 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Construction planners face the decisions of selecting appropriate resources, including crew sizes, equipment, methods and technologies, to perform the tasks of a construction project. In general, there is a trade-off between time and cost to complete a task - the less expensive the resources, the longer it takes. Using Critical Path Method (CPM) techniques, the overall project cost can be reduced by using less expensive resources for non-critical activities without impacting the duration. Furthermore, planners need to adjust the resource selections to shorten or lengthen the project duration. Finding the optimal decisions is difficult and time-consuming considering the numbers of permutations involved. For example, a CPM network with only eight activities, each with two options, will have 28 alternatives. For large problems, exhaustive enumeration is not economically feasible even with very fast computers. 
This paper presents a new algorithm using linear and integer programming to obtain optimal resource ... <s> BIB003 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> List of contributors 1. Simplex Algorithms 2. Interior Point Methods 3. A Computational View of Interior Point Methods 4. Interior Point Algorithms for Network Flow Problems 5. Branch and Cut Algorithms 6. Interior Point Algorithms for Integer Programming 7. Computational Logic and Integer Programming <s> BIB004 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Since the early 1960s many techniques have been developed to plan and schedule linear construction projects. However, one, the critical path method (CPM), overshadowed the others. As a result, CPM developed into the powerful and effective tool that it is today. However, research has indicated that CPM is ineffective for linear construction. Linear construction projects are typified by activities that must be repeated in different locations such as highways, pipelines, and tunnels. Recently, there has been renewed interest in linear scheduling. Much of this interest has involved a technique called the linear scheduling method (LSM). Only recently has there been the ability to calculate the controlling activities of a linear schedule, independent of network analysis. Additional research needs to be done to develop some of the techniques available in CPM into comparable ones for linear scheduling. One of these techniques is resource leveling. This paper uses the vehicle of a highway construction project to present an integer linear programming formulation to level the resources of linear projects. <s> BIB005 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Multiskilling is a workforce strategy that has been shown to reduce indirect labor costs, improve productivity, and reduce turnover. A multiskilled workforce is one in which the workers possess a range of skills that allow them to participate in more than one work process. In practice, they may work across craft boundaries. The success of multiskilling greatly relies on the foreman's ability to assign workers to appropriate tasks and to compose crews effectively. The foreman assigns tasks to workers according to their knowledge, capabilities, and experience on former projects. This research investigated the mechanics of allocating a multiskilled workforce and developed a linear programming model to help optimize the multiskilled workforce assignment and allocation process in a construction project, or between the projects of one company. It is concluded that the model will be most useful in conditions where full employment does not exist; however, it is also useful for short term allocation decisions. By running the model for various simulated scenarios, additional observations were made. For example, it is concluded that, for a capital project, the benefits of multiskilling are marginal beyond approximately a 20% concentration of multiskilled workers in a project workforce. Benefits to workers themselves become marginal after acquiring competency in two or three crafts. These observations have been confirmed by field experience. Extension of this model to allocation of multifunctional resources, such as construction equipment, should also be possible. 
<s> BIB006 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Construction scheduling is the process of devising schemes for sequencing activities. A realistic schedule fulfills actual concerns of users, thus minimizing the chances of schedule failure. The minimization of total project duration has been the concept underlying critical-path method/program evaluation and review technique (CPM/PERT) schedules. Subsequently, techniques including resource management and time-cost trade-off analysis were developed to customize CPM/PERT schedules to fulfill users' concerns regarding project resources, cost, and time. However, financing construction activities throughout the course of the project is another crucial concern that must be properly treated otherwise, nonrealistic schedules are to be anticipated. Unless contractors manage to procure adequate cash to keep construction work running on schedule, the pace of work will definitely be relaxed. Therefore, always keeping scheduled activities in balance with available cash is a potential contribution to producing realistic schedules. An integer-programming finance-based scheduling method is offered to produce financially feasible schedules that balance financing requirements of activities at any period with cash available in that same period. The proposed method offers 2-fold benefits of minimizing total project duration and fulfilling finance availability constraints. <s> BIB007 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> New Heuristics and Adaptive Memory Procedures for Boolean Optimization Problems, Lars M. Hvattum, Arne Lokketangen, and Fred Glover Convergent Lagrangian Methods for Separable Nonlinear Integer Programming: Objective Level-Cut and Domain-Cut Methods, Duan Li, Xiaoling Sun, and Jun Wang The Generalized Assignment Problem, Robert M. Nauss Decomposition in Integer Linear Programming, Ted K. Ralphs and Matthew V. Galati Airline Scheduling Models and Solution Algorithms for the Temporary Closure of Airports, Shangyao Yan and Chung-Gee Lin Determining an Optimal Fleet Mix and Schedules: Part I - Single Source and Destination, Hanif D. Sherali and Salem M. Al-Yakoob Determining an Optimal Fleet Mix and Schedules: Part II - Multiple Sources and Destinations, and the Option of Leasing Transshipment Depots, Hanif D. Sherali and Salem M. Al-Yakoob An Integer Programming Model for the Optimization of Data Cycle Maps, David Panton, Maria John, and Andrew Mason Application of Column-Generation Techniques to Retail Assortment Planning, Govind P. Daruka and Udatta S. Palekar Noncommercial Software for Mixed-Integer Linear Programming, Jeff T. Linderoth and Ted K. Ralphs <s> BIB008 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> The Critical Path Method (CPM) and the Repetitive Scheduling Method (RSM) are the most often used tools for the planning, scheduling and control Linear Repetitive Projects (LRPs). CPM focuses mostly on project’s duration and critical activities, while RSM focuses on resource continuity. In this paper we present a linear programming approach to address the multi objective nature of decisions construction managers face in scheduling LRPs. 
The Multi Objective Linear Programming model (MOLP-LRP) is a parametric model that can optimize a schedule in terms of duration, work-breaks, unit completion time and respective costs, while at the same time the LP range sensitivity analysis can provide useful information regarding cost tradeoffs between delay, work-break and unit delivery costs. MOLPS-LRP can generate alternative schedules based on the relative magnitude and importance of different cost elements. In this sense it provides managers with the capability to consider alternative schedules besides those defined by minimum duration (CPM) or minimum resource work-breaks (RSM). Demonstrative results and analysis are provided through a well known in the literature case study example. <s> BIB009 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> Linear repetitive construction projects require large amounts of resources which are used in a sequential manner and therefore effective resource management is very important both in terms of project cost and duration. Existing methodologies such as the critical path method and the repetitive scheduling method optimize the schedule with respect to a single factor, to achieve minimum duration or minimize resource work breaks, respectively. However real life scheduling decisions are more complicated and project managers must make decisions that address the various cost elements in a holistic way. To respond to this need, new methodologies that can be applied through the use of decision support systems should be developed. This paper introduces a multiobjective linear programming model for scheduling linear repetitive projects, which takes into consideration cost elements regarding the project's duration, the idle time of resources, and the delivery time of the project's units. The proposed model can be used to generate alternative schedules based on the relative magnitude and importance of the different cost elements. In this sense, it provides managers with the capability to consider alternative schedules besides those defined by minimum duration or maximizing work continuity of resources. The application of the model to a well known example in the literature demonstrates its use in providing explicatory analysis of the results. <s> BIB010 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Subject to <s> PREFACE. PART I MODELING. 1 Introduction. 1.1 Integer Programming. 1.2 Standard Versus Nonstandard Forms. 1.3 Combinatorial Optimization Problems. 1.4 Successful Integer Programming Applications. 1.5 Text Organization and Chapter Preview. 1.6 Notes. 1.7 Exercises. 2 Modeling and Models. 2.1 Assumptions on Mixed Integer Programs. 2.2 Modeling Process. 2.3 Project Selection Problems. 2.4 Production Planning Problems. 2.5 Workforce/Staff Scheduling Problems. 2.6 Fixed-Charge Transportation and Distribution Problems. 2.7 Multicommodity Network Flow Problem. 2.8 Network Optimization Problems with Side Constraints. 2.9 Supply Chain Planning Problems. 2.10 Notes. 2.11 Exercises. 3 Transformation Using 0 1 Variables. 3.1 Transform Logical (Boolean) Expressions. 3.2 Transform Nonbinary to 0 1 Variable. 3.3 Transform Piecewise Linear Functions. 3.4 Transform 0 1 Polynomial Functions. 3.5 Transform Functions with Products of Binary and Continuous Variables: Bundle Pricing Problem. 3.6 Transform Nonsimultaneous Constraints. 3.7 Notes. 3.8 Exercises. 4 Better Formulation by Preprocessing. 4.1 Better Formulation. 
4.2 Automatic Problem Preprocessing. 4.3 Tightening Bounds on Variables. 4.4 Preprocessing Pure 0 1 Integer Programs. 4.5 Decomposing a Problem into Independent Subproblems. 4.6 Scaling the Coefficient Matrix. 4.7 Notes. 4.8 Exercises. 5 Modeling Combinatorial Optimization Problems I. 5.1 Introduction. 5.2 Set Covering and Set Partitioning. 5.3 Matching Problem. 5.4 Cutting Stock Problem. 5.5 Comparisons for Above Problems. 5.6 Computational Complexity of COP. 5.7 Notes. 5.8 Exercises. 6 Modeling Combinatorial Optimization Problems II. 6.1 Importance of Traveling Salesman Problem. 6.2 Transformations to Traveling Salesman Problem. 6.3 Applications of TSP. 6.4 Formulating Asymmetric TSP. 6.5 Formulating Symmetric TSP. 6.6 Notes. 6.7 Exercises. PART II REVIEW OF LINEAR PROGRAMMING AND NETWORK FLOWS. 7 Linear Programming Fundamentals. 7.1 Review of Basic Linear Algebra. 7.2 Uses of Elementary Row Operations. 7.3 The Dual Linear Program. 7.4 Relationships Between Primal and Dual Solutions. 7.5 Notes. 7.6 Exercises. 8 Linear Programming: Geometric Concepts. 8.1 Geometric Solution. 8.2 Convex Sets. 8.3 Describing a Bounded Polyhedron. 8.4 Describing Unbounded Polyhedron. 8.5 Faces, Facets, and Dimension of a Polyhedron. 8.6 Describing a Polyhedron by Facets. 8.7 Correspondence Between Algebraic and Geometric Terms. 8.8 Notes. 8.9 Exercises. 9 Linear Programming: Solution Methods. 9.1 Linear Programs in Canonical Form. 9.2 Basic Feasible Solutions and Reduced Costs. 9.3 The Simplex Method. 9.4 Interpreting the Simplex Tableau. 9.5 Geometric Interpretation of the Simplex Method. 9.6 The Simplex Method for Upper Bounded Variables. 9.7 The Dual Simplex Method. 9.8 The Revised Simplex Method. 9.9 Notes. 9.10 Exercises. 10 Network Optimization Problems and Solutions. 10.1 Network Fundamentals. 10.2 A Class of Easy Network Problems. 10.3 Totally Unimodular Matrices. 10.4 The Network Simplex Method. 10.5 Solution via LINGO. 10.6 Notes. 10.7 Exercises. PART III SOLUTIONS. 11 Classical Solution Approaches. 11.1 Branch-and-Bound Approach. 11.2 Cutting Plane Approach. 11.3 Group Theoretic Approach. 11.4 Geometric Concepts. 11.5 Notes. 11.6 Exercises. 12 Branch-and-Cut Approach. 12.1 Introduction. 12.2 Valid Inequalities. 12.3 Cut Generating Techniques. 12.4 Cuts Generated from Sets Involving Pure Integer Variables. 12.5 Cuts Generated from Sets Involving Mixed Integer Variables. 12.6 Cuts Generated from 0 1 Knapsack Sets. 12.7 Cuts Generated from Sets Containing 0 1 Coefficients and 0 1 Variables. 12.8 Cuts Generated from Sets with Special Structures. 12.9 Notes. 12.10 Exercises. 13 Branch-and-Price Approach. 13.1 Concepts of Branch-and-Price. 13.2 Dantzig Wolfe Decomposition. 13.3 Generalized Assignment Problem. 13.4 GAP Example. 13.5 Other Application Areas. 13.6 Notes. 13.7 Exercises. 14 Solution via Heuristics, Relaxations, and Partitioning. 14.1 Introduction. 14.2 Overall Solution Strategy. 14.3 Primal Solution via Heuristics. 14.4 Dual Solution via Relaxation. 14.5 Lagrangian Dual. 14.6 Primal Dual Solution via Benders Partitioning. 14.7 Notes. 14.8 Exercises. 15 Solutions with Commercial Software. 15.1 Introduction. 15.2 Typical IP Software Components. 15.3 The AMPL Modeling Language. 15.4 LINGO Modeling Language. 15.5 MPL Modeling Language. REFERENCES. APPENDIX: ANSWERS TO SELECTED EXERCISES. INDEX. <s> BIB011
|
$Ax \le b$  (2)
$x \ge 0$   (3)
where $x \in \mathbb{R}^{n}$ is the unknown variable vector, $A \in \mathbb{R}^{m \times n}$ is the coefficient matrix, and $b \in \mathbb{R}^{m}$, $c \in \mathbb{R}^{n}$ are coefficient vectors. The objective of the problem is to optimize (maximize or minimize) the linear objective function subject to constraints (2) and (3). If some or all of the variables are restricted to be integers, the LP problem is transformed into an IP problem as follows:
Optimize $z = c^{T}x$  (4)
Subject to
$Ax \le b$  (5)
$x \ge 0$   (6)
$x_{i}$ integer for some or all $i = 1, 2, \ldots, n$  (7)
For example, a time-cost trade-off construction scheduling problem can be formulated as an LP model BIB002 whose objective (8) minimizes the overall project cost subject to constraints (9)-(14), where $C_i$ is the cost of activity i; $S_i$, $D_i$ and $O_i$ are the start time, duration and number of inequality constraints of activity i, respectively; $D_{max}$ is the maximum allowable overall project duration; $C_i^{min}$ and $D_i^{min}$ are the minimum cost and duration of activity i, respectively; $M_{ij}$ is the slope of the inequality constraint connecting adjacent option pairs of activity i; $B_{ij}$ is the cost intercept of option j with respect to activity i; and n denotes the total number of activities. Typical methods such as the simplex algorithm, the criss-cross algorithm and interior point methods can be used to solve the LP problem efficiently. A similar problem can be expressed as an IP model BIB002 in which the objective is again to minimize the overall project cost, now subject to constraints (16)-(22), where $X_{ij}$ are decision variables assigning option j to activity i. The decision variables $X_{ij}$ are restricted to be integers chosen between 0 and 1, and constraints (21) and (22) ensure that only one option is chosen for each activity during the optimization process. Several efficient approaches for solving IP problems have been developed, such as the cutting-plane, branch-and-bound, branch-and-cut and branch-and-price methods BIB011. For more information on IP and LP, refer to BIB004 and BIB008.
Mathematical methods for scheduling have received a considerable amount of attention owing to their innate efficiency and accuracy. IP has been applied to solve the linear and discrete relationships between different activities in the scheduling optimization of a highway construction project with a number of repetitive activities. Similarly, BIB005 developed an integer LP approach to solve the highway construction problem using the resource leveling technique. The concepts of rate float and activity float are introduced based on the resource utilization of a particular activity; if activities share common resources, rate float can be used to achieve better resource utilization BIB005. A disadvantage of this method is that the computational burden may grow tremendously as the problem size increases. In addition, the method has a single-objective focus (ie, leveling the resources), and thus the maximization of production rates is not considered. Elazouni and Gab-Allah (2004) proposed an IP finance-based scheduling method to produce feasible schedules that balance the financing requirements of activities in any period with the cash available during that same period. The proposed method can be used to minimize total project duration and fulfil finance availability constraints BIB007.
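To make the discrete IP formulation above concrete, the sketch below sets up a small time-cost trade-off instance with binary option variables and solves it with the open-source PuLP library. The activity data, option lists, precedence logic and deadline are illustrative assumptions, not taken from any cited study.

```python
# Minimal illustrative sketch (assumed data) of the discrete time-cost
# trade-off IP described above, using the PuLP library. Each activity i has
# candidate (duration, direct cost) options; the binary variable x[i][j]
# selects exactly one option per activity, mirroring the "one option per
# activity" constraints discussed in the text.
import pulp

options = {                      # activity -> list of (duration, direct cost)
    "A": [(3, 500), (2, 800)],
    "B": [(4, 700), (3, 1000), (2, 1500)],
    "C": [(5, 600), (4, 900)],
}
predecessors = {"A": [], "B": ["A"], "C": ["A"]}   # finish-to-start logic
D_max = 8                                          # allowable project duration

prob = pulp.LpProblem("time_cost_tradeoff", pulp.LpMinimize)

x = {i: [pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(len(opts))]
     for i, opts in options.items()}
S = {i: pulp.LpVariable(f"S_{i}", lowBound=0) for i in options}       # start times
D = {i: pulp.lpSum(x[i][j] * options[i][j][0] for j in range(len(options[i])))
     for i in options}                                                # durations

# Objective: minimize total direct cost of the chosen options
prob += pulp.lpSum(x[i][j] * options[i][j][1]
                   for i in options for j in range(len(options[i])))

for i in options:
    prob += pulp.lpSum(x[i]) == 1                 # exactly one option per activity
    for p in predecessors[i]:
        prob += S[i] >= S[p] + D[p]               # precedence (finish-to-start)
    prob += S[i] + D[i] <= D_max                  # overall deadline

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in options:
    j = next(k for k in range(len(options[i])) if pulp.value(x[i][k]) > 0.5)
    print(i, "option", j, "start", pulp.value(S[i]))
print("total direct cost:", pulp.value(prob.objective))
```

Branch-and-bound solvers such as CBC handle this binary model directly; larger instances simply add more option variables and precedence rows to the same structure.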
Besides IP, LP has also been applied in the construction scheduling area, especially for problems with linear objective functions and constraints BIB006. To overcome the single-objective limitation of traditional scheduling methods such as CPM, BIB010 proposed a multi-objective LP model for scheduling linear repetitive projects that considers cost elements regarding the project's duration, the idle time of resources and the delivery time of the project's units. The proposed model can be used to generate alternative schedules based on the relative magnitude and importance of the different cost elements BIB010. The LP range sensitivity analysis can provide useful information regarding cost trade-offs between project and resource delays, and gives managers the capability to consider alternative schedules besides those defined by minimum duration or minimum resource work-breaks BIB009. It has been suggested that weights could be introduced into the multi-objective function so as to enhance the performance of the proposed method. A graphically based approach has also been developed to assist the LP analysis of linear schedules, referred to as the Planning and Optimization for Linear Operations system. This system provides a graphic LP modelling environment in which model formulation can be accomplished in a graphic and interactive fashion. However, the solutions obtained are not guaranteed to be cost optimal, and the method is currently only applicable to repetitive activities; one-off activities need to be dealt with separately. BIB003 proposed a hybrid optimization approach that integrates LP and IP for determining the time-cost trade-off solution of a construction scheduling problem. The method is applied in two stages: (1) LP is used to generate a lower bound of the minimum direct-cost curve; and (2) IP is used to find the exact solutions. The proposed hybrid LP/IP method provides construction planners with an efficient way of analysing the time-cost trade-off problem.
Dynamic programming. Dynamic programming is a mathematical method for solving complex problems that can be broken down into sub-problems; it is particularly efficient for problems with overlapping sub-problems. Numerous examples of dynamic programming can be found in the construction engineering and management literature. For example, a dynamic programming approach has been presented to solve time-cost trade-off problems. BIB001 proposed a dynamic programming model that introduces a cost variable into the optimization process. The model obtains the solution in two stages: (1) a forward process, in which a time-cost trade-off analysis is employed to determine local minimum conditions; and (2) a backward process, a simple scanning and selection procedure that ensures an overall minimum state is attained. Formulating objective functions and constraints is a time-consuming and arduous task, and few construction planners possess the mathematical knowledge required to perform such a formulation; as a result, the application of these methods to construction and engineering project scheduling has been limited to date.
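As an illustration of the staged dynamic programming idea, the sketch below is a minimal example under the simplifying assumption of a purely serial activity chain (it is not the model of any cited study): one (duration, cost) option is chosen per activity so that total direct cost is minimized within a deadline, with the state defined by the activity index and the time consumed so far.

```python
# Minimal dynamic programming sketch (assumed serial chain and illustrative
# data) for the discrete time-cost trade-off: minimize total direct cost
# subject to a project deadline. State = (activity index, time used so far);
# value = minimum cost of completing the remaining activities.
from functools import lru_cache

options = [                      # one list of (duration, cost) options per activity
    [(3, 500), (2, 800)],
    [(4, 700), (3, 1000), (2, 1500)],
    [(5, 600), (4, 900)],
]
DEADLINE = 10
INF = float("inf")

@lru_cache(maxsize=None)
def min_cost(i: int, used: int) -> float:
    """Minimum cost of finishing activities i..end given 'used' time so far."""
    if i == len(options):
        return 0.0
    best = INF
    for dur, cost in options[i]:
        if used + dur <= DEADLINE:                 # respect the deadline
            best = min(best, cost + min_cost(i + 1, used + dur))
    return best

def best_plan():
    """Recover one optimal option index per activity from the DP values."""
    plan, used = [], 0
    for i in range(len(options)):
        for j, (dur, cost) in enumerate(options[i]):
            if used + dur <= DEADLINE and \
               cost + min_cost(i + 1, used + dur) == min_cost(i, used):
                plan.append(j)
                used += dur
                break
    return plan

print("minimum total direct cost:", min_cost(0, 0))
print("chosen option per activity:", best_plan())
```

A full dynamic programming treatment of a general activity network would carry richer state information, but the decomposition into stages and overlapping sub-problems is the same idea.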
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Heuristic methods <s> A network flow method is outlined for solving the linear programming problem of computing the least cost curve for a project composed of many individual jobs, where it is assumed that certain jobs must be finished before others can be started. Each job has an associated crash completion time and normal completion time, and the cost of doing the job varies linearly between these extreme times. Given that the entire project must be completed in a prescribed time interval, it is desired to find job times that minimize the total project cost. The~method solves this problem for all feasible time intervals. <s> BIB001 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Heuristic methods <s> This article describes an algorithm for efficiently shortening the duration of a project when the expected project duration exceeds a predetermined limit. The problem consists of determining which activities to expedite and by what amount. The objective is to minimize the cost of the project. ::: ::: This algorithm is considerably less complex than the analytic methods currently available. Because of its inherent simplicity, the algorithm is ideally suited for hand computation and also is suitable for computer solution. Solutions derived by the algorithm were compared with linear programming results. These comparisons revealed that the algorithm solutions are either a equally good or b nearly the same as the solutions obtained by more complex analytic methods which require a computer. ::: ::: With this method the CPM time-cost tradeoff problem is solved without access to a computer, thereby making this planning tool available to managers who otherwise would find implementation impractical. <s> BIB002 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Heuristic methods <s> This paper presents a new method for critical path (CPM) scheduling that optimizes project duration in order to minimize the project total cost. In addition, the method could be used to produce constrained schedules that accommodate contractual completion dates of projects and their milestones. The proposed method is based on the well-known "direct stiffness method" for structural analysis. The method establishes a complete analogy between the structural analysis problem with imposed support settlement and that of project scheduling with imposed target completion date. The project CPM network is replaced by an equivalent structure. The equivalence conditions are established such that when the equivalent structure is compressed by an imposed displacement equal to the schedule compression, the sum of all member forces represents the additional cost required to achieve such compression. To enable a comparison with the currently used methods, an example application from the literature is analyzed using the pr... <s> BIB003 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Heuristic methods <s> An alternative heuristic method for scheduling repetitive projects in which resources are limited and activities may be executed with multiple modes of resource demands associated with different durations is proposed. 
Unlike general heuristic methods that separately analyze each competing activity and schedule only one at a time, the proposed heuristic algorithm ranks possible combinations of activities every time and simultaneously schedules all activities in the selected combination leading to minimal project duration. All alternative combinations of activities in consideration of resource constraints, multiple modes and characteristics of the repetitive projects are determined through a permutation tree-based procedure. The heuristic method is implemented based on the corresponding framework. An example is presented to demonstrate the efficiency of the proposed heuristic method. The study is expected to provide an efficient heuristic methodology for solving the project scheduling problem. <s> BIB004 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Heuristic methods <s> Conventional project scheduling is restricted to single-skilled resource assumption where each worker is assumed to have only one skill. This, in effect, contradicts real-world practice where workers may possess multiple skills and, on several occasions, are assigned to perform tasks for which they are not specialized. Past research has shown a simple process of heuristic approach for multi-skilled resource scheduling where a project is planned under the assumption that each resource can have more than one skill and resource substitution is allowed. Nevertheless, the approach has presented resource substitution step where an activity with higher priority can claim any resource regardless of its concurrent activities' resource requirements. Furthermore, the approach is subjected to all-or-nothing resource assignment concept where an activity cannot start and resources are not needed for that activity at all unless the required resources of that activity can be completely fulfilled. This research presents an alternative heuristic approach for multi-skilled resource scheduling in an attempt to improve the resource substitution approach. Augmented resource substitution rule and resource-driven task duration are presented to increase starting opportunity of activities on earlier time. Case studies are presented to illustrate the improved result of shorter project duration. <s> BIB005
|
Heuristic methods are based on past experience of problem solving. Prevalent heuristic methods include Fondahl's precedence method, Prager's structural model, the Siemens approximation BIB002 and the structural stiffness method BIB003. Fondahl developed a precedence methodology as an alternative to CPM; the method provides an effective manual process for determining a schedule instead of relying on a computer-based CPM. In Fondahl's approach, a 'circle and connecting line' diagram derived from process flow diagrams or flow-charts was used to address a number of issues such as the time-cost trade-off problem. Notably, current project management software utilizes the manual calculation approach developed by Fondahl.
On the basis of the method of BIB001, Prager (1963) proposed a structural model to interpret the network flow formulation. The activities of a project and the progress towards its completion are described as jobs and events, respectively. Each job is represented by a structural member that consists of a rigid sleeve containing a compressible rod with a piston at its protruding end, and the events are arranged between the jobs as thin rigid discs. An algorithm is then provided for the scheduling calculation. It is assumed that the normal and crash completion times are known for each job, with the cost of completing the job varying linearly between these times; a more complicated non-linear relationship between time and cost is not considered.
BIB002 developed an algorithm that can effectively reduce the duration of a project when its expected duration exceeds a predetermined limit. The algorithm shortens the project duration at minimum cost by determining which activities to expedite and by what amount. The proposed algorithm is simpler than analytic methods (eg, LP), as it can be calculated and applied to time-cost trade-off problems without the use of a computer. However, the solution obtained by this algorithm cannot be guaranteed to be optimal; in fact, it is difficult to determine whether the obtained solution is optimal or not.
BIB003 proposed a method for CPM scheduling that optimizes the project duration to minimize total cost. The method can be used to produce constrained schedules that accommodate contractual completion dates of projects. The proposed method is based on the well-known 'direct stiffness method' for structural analysis, which establishes a complete analogy between the structural analysis problem with imposed support settlement and that of project scheduling with an imposed target completion date. The CPM network is replaced by an equivalent structure whose compression is equivalent to the schedule compression; the cost required to achieve such a compression is represented by the sum of all member forces BIB003.
Zhang et al (2006a) developed a heuristic method for scheduling multiple-mode repetitive construction projects subject to resource constraints. The method categorizes activities into groups according to their possible combinations and schedules all the activities in the selected group simultaneously to minimize the project duration. A permutation tree-based procedure is employed to determine the alternative activity combinations, and the heuristic algorithm ranks all alternative combinations of activities and selects the one leading to a minimal increase in project duration. A framework of the project scheduling system is constructed so as to implement the heuristic method BIB004.
Minimizing the project duration may reduce the indirect cost; however, it may also increase the direct cost or the overall cost, which was not considered in this method. A heuristic method has also been proposed for scheduling multiple projects subject to cash constraints. The method determines cash availability during a given period and identifies the schedules for all possible activities as well as the cash requirements for each schedule. The schedules are evaluated according to their impact on the project's duration, and the influence of the activities on cash flow within the selected schedule is also determined. A comparison between the proposed approach and IP on a project with 15 networks and 60 activities shows that the solutions obtained using the heuristic method are comparable to the optimum solutions. This heuristic method can be easily integrated into management software to handle the project scheduling problem under finance-constrained conditions. A drawback, however, is that the computational effort grows exponentially as the number of eligible activities and the time span increase.
Heuristic resource-scheduling solutions have also been modified by introducing multi-skilled resources. The developed approach stores and utilizes information about the resources that can be substituted; using this information, less-utilized resources can be combined to substitute the constrained resources during the shortage period in order to reduce the project cost and time. To improve the resource substitution approach, BIB005 introduced an alternative heuristic for the multi-skilled resource scheduling problem. An augmented resource substitution rule and resource-driven task duration are presented to increase the starting opportunity for activities. A limitation of the proposed method is that other real-world resource substitution alternatives, such as overtime work or temporary external workers, are not considered. The method is only valid for start-to-finish relationships, and minimization of the overall project cost is not considered in the aforementioned methods.
Heuristic methods are non-computer approaches that require less computational effort than mathematical methods and can invariably be calculated by manual means. Owing to their simplicity, heuristic methods have been widely adopted to solve the CSO. However, since traditional heuristic methods can only optimize one objective, a global optimum is not guaranteed, and heuristic methods do not provide a pool of possible solutions from which the construction planner may choose according to different construction scenarios. Their inefficiency in solving multi-objective scheduling problems also limits their further application. Finally, heuristic methods are problem dependent and therefore cannot be generalized to all other cases.
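In the spirit of the cost-slope crashing heuristics reviewed above (repeatedly expediting the cheapest critical activity, as in the Siemens-type approach), the sketch below is a minimal, generic implementation; the network data, crash limits and cost slopes are assumptions and the procedure is not the algorithm of any cited paper.

```python
# Minimal greedy "cost-slope" crashing heuristic. Network data, crash limits
# and cost slopes are illustrative assumptions. At each step the critical
# activity with the lowest cost slope is shortened by one time unit.
acts = {
    # name: dict(dur, min_dur, slope = extra cost per unit of crashing, preds)
    "A": dict(dur=4, min_dur=2, slope=100, preds=[]),
    "B": dict(dur=6, min_dur=4, slope=80,  preds=["A"]),
    "C": dict(dur=5, min_dur=3, slope=120, preds=["A"]),
    "D": dict(dur=3, min_dur=2, slope=60,  preds=["B", "C"]),
}
TARGET = 10

def cpm(acts):
    """Forward/backward pass; returns project duration and the critical set."""
    order = list(acts)                              # already topologically ordered here
    es, ef = {}, {}
    for a in order:
        es[a] = max((ef[p] for p in acts[a]["preds"]), default=0)
        ef[a] = es[a] + acts[a]["dur"]
    duration = max(ef.values())
    lf, ls = {}, {}
    for a in reversed(order):
        succs = [s for s in acts if a in acts[s]["preds"]]
        lf[a] = min((ls[s] for s in succs), default=duration)
        ls[a] = lf[a] - acts[a]["dur"]
    critical = {a for a in acts if ls[a] == es[a]}  # zero total float
    return duration, critical

extra_cost = 0
duration, critical = cpm(acts)
while duration > TARGET:
    candidates = [a for a in critical if acts[a]["dur"] > acts[a]["min_dur"]]
    if not candidates:                              # nothing left to crash
        break
    a = min(candidates, key=lambda a: acts[a]["slope"])
    acts[a]["dur"] -= 1                             # crash by one time unit
    extra_cost += acts[a]["slope"]
    duration, critical = cpm(acts)

print("final duration:", duration, "extra crashing cost:", extra_cost)
```

Because the greedy rule looks only one step ahead, the crashing cost it accumulates is not guaranteed to be minimal, which mirrors the optimality caveat noted above.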
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> A new approach for resource scheduling using genetic algorithms (GAs) is presented here. The methodology does not depend on any set of heuristic rules. Instead, its strength lies in the selection and recombination tasks of the GA to learn the domain of the specific project network. By this it is able to evolve improved schedules with respect to the objective function. Further, the model is general enough to encompass both resource leveling and limited resource allocation problems unlike existing methods, which are class-dependent. In this paper, the design and mechanisms of the model are described. Case studies with standard test problems are presented to demonstrate the performance of the GA-scheduler when compared against heuristic methods under various resource availability profiles. Results obtained with the proposed model do not indicate an exponential growth in the computational time required for larger problems. <s> BIB001 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Time-cost trade-off analysis is one of the most important aspects of construction project planning and control. There are trade-offs between time and cost to complete the activities of a project; in general, the less expensive the resources used, the longer it takes to complete an activity. Using critical path method (CPM), the overall project cost can be reduced by using less expensive resources for noncritical activities without impacting the project duration. Existing methods for time-cost trade-off analysis focus on using heuristics or mathematical programming. These methods, however, are not efficient enough to solve large-scale CPM networks (hundreds of activities or more). Analogous to natural selection and genetics in reproduction, genetic algorithms (GAs) have been successfully adopted to solve many science and engineering problems and have proven to be an efficient means for searching optimal solutions in a large problem domain. This paper presents: (1) an algorithm based on the principles of GAs for construction time-cost trade-off optimization; and (2) a computer program that can execute the algorithm efficiently. <s> BIB002 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Time-cost optimization problems in construction projects are characterized by the constraints on the time and cost requirements. Such problems are difficult to solve because they do not have unique solutions. Typically, if a project is running behind the scheduled plan, one option is to compress some activities on the critical path so that the target completion time can be met. As combinatorial optimization problems, time-cost optimization problems are suitable for applying genetic algorithms (GAs). However, basic GAs may involve very large computational costs. This paper presents several improvements to basic GAs and demonstrates how these improved GAs reduce computational costs and significantly increase the efficiency in searching for optimal solutions. <s> BIB003 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization. The operators of each paradigm are reviewed, focusing on how each affects search behavior in the problem space. 
The goals of the paper are to provide additional insights into how each paradigm works, and to suggest ways in which performance might be improved by incorporating features from one paradigm into the other. <s> BIB004 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Resources for construction activities are limited in the real construction world. To avoid the waste and shortage of resources on a construction jobsite, scheduling must include resource allocation. A multicriteria computational optimal scheduling model, which integrates the time/cost trade-off model, resource-limited model, and resource leveling model, is proposed. A searching technique using genetic algorithms (GAs) is adopted in the model. Furthermore, the nondominated solutions are found by the multiple attribute decision-making method, technique for order preference by similarity to ideal solution. The model can effectively provide the optimal combination of construction durations, resource amounts, minimum direct project costs, and minimum project duration under the constraint of limited resources. <s> BIB005 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> In the management of a construction project, the project duration can often be compressed by accelerating some of its activities at an additional expense. This is the so-called time-cost trade-off ... <s> BIB006 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Existing genetic algorithms (GA) based systems for solving time-cost trade-off problems suffer from two limitations. First, these systems require the user to manually craft the time-cost curves for formulating the objective functions. Second, these systems only deal with linear time-cost relationships. To overcome these limitations, this paper presents a computer system called MLGAS (Machine Learning and Genetic Algorithms based System), which integrates a machine learning method with GA. A quadratic template is introduced to capture the nonlinearity of time-cost relationships. The machine learning method automatically generates the quadratic time-cost curves from historical data and also measures the credibility of each quadratic time-cost curve. The quadratic curves are then used to formulate the objective function that can be solved by the GA. Several improvements are made to enhance the capacity of GA to prevent premature convergence. Comparisons of MLGAS with an experienced project manager indicate that MLGAS generates better solutions to nonlinear time-cost trade-off problems. <s> BIB007 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Resource allocation and leveling are among the top challenges in project management. Due to the complexity of projects, resource allocation and leveling have been dealt with as two distinct subproblems solved mainly using heuristic procedures that cannot guarantee optimum solutions. In this paper, improvements are proposed to resource allocation and leveling heuristics, and the Genetic Algorithms (GAs) technique is used to search for near-optimum solution, considering both aspects simultaneously. In the improved heuristics, random priorities are introduced into selected tasks and their impact on the schedule is monitored. 
The GA procedure then searches for an optimum set of tasks' priorities that produces shorter project duration and better-leveled resource profiles. One major advantage of the procedure is its simple applicability within commercial project management software systems to improve their performance. With a widely used system as an example, a macro program is written to automate the GA proced... <s> BIB008 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> SUMMARY Reliable construction schedule is vital for effective co-ordination across supply chains and various trades at construction work face. According to the lean construction concept, reliability of the schedule can be enhanced through detection and satisfaction of all potential constraints prior to releasing operation assignments. However, it is difficult to implement this concept since current scheduling tools and techniques are fragmented and designed to deal with a limited set of construction constraints. This paper introduces a methodology termed ‘multi-constraint scheduling’ in which four major groups of construction constraints including physical, contract, resource, and information constraints are considered. A Genetic Algorithm (GA) has been developed and used for multi-constraint optimisation problem. Given multiple constraints such as activity dependency, limited working area, and resource and information readiness, the GA alters tasks’ priorities and construction methods so as to arrive at optimum or near optimum set of project duration, cost, and smooth resource profiles. This feature has been practically developed as an embedded macro in MS Project. Several experiments confirmed that GA can provide near optimum solutions within acceptable searching time (i.e. 5 minutes for 1.92E11 alternatives). Possible improvements to this research are further suggested in the paper. <s> BIB009 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> This paper presents an augmented Lagrangian genetic algorithm model for resource scheduling. The algorithm considers scheduling characteristics that were ignored in prior research. Previous resource scheduling formulations have primarily focused on project duration minimization. Furthermore, resource leveling and resource-constrained scheduling have traditionally been solved independently. The model presented here considers all precedence relationships, multiple crew strategies, total project cost minimization, and time-cost trade-off. In the new formulation, resource leveling and resource-constrained scheduling are performed simultaneously. The model presented uses the quadratic penalty function to transform the resource-scheduling problem to an unconstrained one. The algorithm is general and can be applied to a broad class of optimization problems. An illustrative example is presented to demonstrate the performance of the proposed method. <s> BIB010 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Reducing both project cost and time (duration) is critical in a competitive environment. However, a trade-off between project time and cost is required. This in turn requires contracting organizations to carefully evaluate various approaches to attaining an optimal time-cost equilibrium. 
Although several analytical models have been developed for time-cost optimization (TCO), they mainly focus on projects where the contract duration is fixed. The optimization objective in those cases is therefore restricted to identifying the minimum total cost only. With the increasing popularity of alternative project delivery systems, clients and contractors are targeting the increased benefits and opportunities of seeking an earlier project completion. The multiobjective model for TCO proposed in this paper is powered by techniques using genetic algorithms (GAs). The proposed model integrates the adaptive weights derived from previous generations, and induces a search pressure toward an ideal point. The concept of the GA-based multiobjective TCO model is illustrated through a simple manual simulation, and the results indicate that the model could assist decision-makers in concurrently arriving at an optimal project duration and total cost. <s> BIB011 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Time–cost optimization (TCO) is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost, would usually be at the expense of the other. Although the TCO problem has been extensively examined, many research studies only focused on minimizing the total cost for an early completion. This does not necessarily convey any reward to the contractor. However, with the increasing popularity of alternative project delivery systems, clients and contractors are more concerned about the combined benefits and opportunities of early completion as well as cost savings. In this paper, a genetic algorithms ( GAs ) -driven multiobjective model for TCO is proposed. The model integrates the adaptive weight to balance the priority of each objective according to the performance of the previous “generation.” In addition, the model incorporates Pareto ranking as a selection criterion and the niche formation techniques to improve popularity diversity. Based on the pr... <s> BIB012 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> This paper introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO). The MRCPSP considers both renewable and nonrenewable resources that have not been addressed efficiently in the construction field. The framework of the PSO-based methodology is developed with the objective of minimizing project duration. A particle representation formulation is proposed to represent the potential solution to the MRCPSP in terms of priority combination and mode combination for activities. Each particle-represented solution should be checked against the nonrenewable resource infeasibility and will be handled by adjusting the mode combination. The feasible particle-represented solution is transformed to a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. Comparisons with other methods show that the PSO method is equally efficient at solving the MRCPSP. <s> BIB013 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Time and cost are the most important factors to be considered in every construction project. 
In order to maximize the return, both the client and contractor would strive to optimize the project duration and cost concurrently. Over the years, many research studies have been conducted to model the time–cost relationships, and the modeling techniques range from the heuristic methods and mathematical approaches to genetic algorithms. Despite that, previous studies often assumed the time being constant leaving the analyses based purely on a single objective—cost. Acknowledging the significance of time–cost optimization, an evolutionary-based optimization algorithm known as ant colony optimization is applied to solve the multiobjective time–cost optimization problems. In this paper, the basic mechanism of the proposed model is unveiled. Having developed a program in the Visual Basic platform, tests are conducted to compare the performance of the proposed model against other analytical methods previously used fo... <s> BIB014 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> This paper develops a new method for scheduling repetitive construction projects with several objectives such as project duration, project cost, or both of them. The method deals with constraints of precedence relationships between activities, and constraints of resource work continuity. The method considers different attributes of activities (such as activities which allow or do not allow interruptions), and different relationships between direct costs and durations for activities (such as linear, non-linear, continuous, or discrete relationship) to provide a satisfactory schedule. In order to minimize the mentioned objectives, the proposed method finds a set of suitable durations for activities by genetic algorithm, and then determines the suitable start times of these activities by a scheduling algorithm. The bridge construction example from literature is analyzed to validate the proposed method, and another example is also given to illustrate its new capability in project planning. <s> BIB015 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Since scheduling of multiple projects is a complex and time-consuming task, a large number of heuristic rules have been proposed by researchers for such problems. However, each of these rules is usually appropriate for only one specific type of problem. In view of this, a hybrid of genetic algorithm and simulated annealing (GA-SA Hybrid) is proposed in this paper for generic multi-project scheduling problems with multiple resource constraints. The proposed GA-SA Hybrid is compared to the modified simulated annealing method (MSA), which is more powerful than genetic algorithm (GA) and simulated annealing (SA). As both GA and SA are generic search methods, the GA-SA Hybrid is also a generic search method. The random-search feature of GA, SA and GA-SA Hybrid makes them applicable to almost all kinds of optimization problems. In general, these methods are more effective than most heuristic rules. Three test projects and three real projects are presented to show the advantage of the proposed GA-SA Hybrid method. It can be seen that GA-SA Hybrid has better performance than GA, SA, MSA, and some most popular heuristic methods. <s> BIB016 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Time–cost trade-off analysis is addressed as an important aspect of any construction project planning and control. 
Nonexistence of a unique solution makes the time–cost trade-off problems very difficult to tackle. As a combinatorial optimization problem one may apply heuristics or mathematical programming techniques to solve time–cost trade-off problems. In this paper, a new multicolony ant algorithm is developed and used to solve the time–cost multiobjective optimization problem. Pareto archiving together with innovative solution exchange strategy are introduced which are highly efficient in developing the Pareto front and set of nondominated solutions in a time–cost optimization problem. An 18-activity time–cost problem is used to evaluate the performance of the proposed algorithm. Results show that the proposed algorithm outperforms the well-known weighted method to develop the nondominated solutions in a combinatorial optimization problem. The paper is more relevant to researchers who are interested i... <s> BIB017 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> An issue has arisen with regard to which of the schedule generation schemes will perform better for an arbitrary instance of the resource-constrained project scheduling problem (RCPSP), which is one of the most challenging areas in construction engineering and management. No general answer has been given to this issue due to the different mechanisms between the serial scheme and the parallel scheme. In an effort to address this issue, this paper compares the two schemes using a permutation-based Elitist genetic algorithm for the RCPSP. Computational experiments are presented with multiple standard problems. From the results of a paired difference experiment, the algorithm using the serial scheme provides better solutions than the one using the parallel scheme. The results also show that the algorithm with the parallel scheme takes longer to solve each problem than the one using the serial scheme. <s> BIB018 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> In this paper, the intelligent optimization methods including genetic algorithm (GA), particle swarm optimization (PSO) and modified particle swarm optimization (MPSO) are used in optimizing the project scheduling of the first mining face of the second region of the fifth Ping'an coal mine in China. The result of optimization provides essential information of management and decision-making for governors and builder. The process of optimization contains two parts: the first part is obtaining the time parameters of each process and the network graph of the first mining face in the second region by PERT (program evaluation and review technique) method based on the raw data. The other part is the second optimization to maximal NPV (net present value) based on the network graph. The starting dates of all processes are decision-making variables. The process order and time are the constraints. The optimization result shows that MPSO is better than GA and PSO and the optimized NPV is 14,974,000 RMB more than the original plan. <s> BIB019 </s> A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Metaheuristic method <s> Research on a new metaheuristic for optimization is often initially focused on proof of concept applications. Time and cost are the most important factors to be considered in every construction project. Over the years, many research studies have been conducted to model the time-cost relationship. 
Construction planners often face the challenge of optimum resources utilization to compromise between different and usually conflicting aspects of projects. Time, cost and quality of project delivery are among the crucial aspects of each project. Ant colony optimization, which was introduced in the early 1990’s as a novel technique for solving hard combinational optimization problem, finds itself currently at this point of its life cycle. In this paper, new metaheuristic multi-colony ant algorithm is developed for the optimization of three objectives time-cost quality with quantity as a trade off problem. The model is also applied to two objectives time – cost trade off problem and the results are compared to those of the existing approaches. <s> BIB020
|
Metaheuristic methods are used for solving combinatorial optimization problems whose optimal solution lies in a discrete search space. A metaheuristic can improve a candidate solution by iterative computation with regard to a given criterion without making many assumptions about the problem at hand. Popular metaheuristics are nature-inspired methods developed by mimicking natural behaviours; the purpose of using such methods (eg, the genetic algorithm (GA), ant colony optimization (ACO) and particle swarm optimization (PSO)) is to imitate natural processes in the search for optimal solutions. Among these methods, the GA has become the most popular approach for addressing the CSP in the construction and engineering literature.
Genetic algorithm. GA belongs to the larger class of evolutionary algorithms (EA) that solve optimization problems using techniques based on natural evolution; other members of EA include genetic programming, evolutionary programming and evolution strategies. A GA is a random search algorithm based on the mechanisms of natural selection and survival of the fittest. The three most important phases involved in a GA are selection, crossover and mutation (Figure 2). To utilize a GA, all the decision variables, for example the options for each construction activity, are encoded into a string called a chromosome, whose genes are represented by binary digits, integers or real numbers. An initial population is then chosen randomly and each chromosome's fitness is evaluated with regard to the objective function. According to the fitness, a selection method is employed and a candidate population is created for crossover, which allows information exchange between parents to generate new offspring. In the mutation phase, genes at some randomly chosen loci are altered to alleviate the premature convergence caused in the crossover phase. A new population is then generated for the next iteration. The GA is an efficient global parallel search algorithm that can accumulate information from the search space and obtain an optimal or suboptimal solution adaptively.
In complex projects, resource allocation and leveling are invariably dealt with as two distinct sub-problems solved predominantly using heuristics that cannot guarantee optimum solutions. BIB001 used a GA-scheduler to deal with the construction resource scheduling problem. The proposed GA-scheduler is capable of resource leveling and limited resource allocation without using any heuristic rules. Two types of constraints were considered: hard and soft. Hard constraints cannot be violated or relaxed, whereas soft constraints can be relaxed to some extent with a penalty on performance. The GA-scheduler can obtain solutions with at least the same, or even shorter, project duration than those generated by heuristic methods; moreover, its computational effort does not increase exponentially. BIB001 also recommended the use of local optimization or a greedy algorithm for a quick search of the vicinity of the GA-produced solution to locate the nearby optimum. Due to the insufficiency of heuristic and mathematical programming methods for solving large-scale CPM network problems, BIB002 employed a GA to solve the construction time-cost trade-off problem using Pareto optimality.
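The sketch below makes the GA mechanics described above (chromosome encoding, selection, crossover and mutation) concrete for the discrete time-cost trade-off: a chromosome is a vector of option indices, one per activity, and fitness is total direct cost plus a penalty for exceeding a deadline. The encoding, data and parameter values are illustrative assumptions and do not reproduce any cited model.

```python
# Minimal GA sketch for the discrete time-cost trade-off. A chromosome is a
# list of option indices (one gene per activity); fitness penalizes schedules
# whose serial duration exceeds the deadline. Data and GA parameters are
# illustrative assumptions only.
import random

OPTIONS = [                      # per activity: list of (duration, direct cost)
    [(3, 500), (2, 800)],
    [(4, 700), (3, 1000), (2, 1500)],
    [(5, 600), (4, 900)],
    [(6, 400), (4, 650), (3, 950)],
]
DEADLINE, PENALTY = 14, 1000     # cost penalty per day of overrun
POP, GENS, P_MUT = 30, 60, 0.1

def fitness(chrom):
    dur = sum(OPTIONS[i][g][0] for i, g in enumerate(chrom))   # serial chain assumed
    cost = sum(OPTIONS[i][g][1] for i, g in enumerate(chrom))
    return cost + PENALTY * max(0, dur - DEADLINE)             # lower is better

def random_chrom():
    return [random.randrange(len(opts)) for opts in OPTIONS]

def tournament(pop):
    a, b = random.sample(pop, 2)                               # binary tournament
    return a if fitness(a) < fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))                         # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom):
    return [random.randrange(len(OPTIONS[i])) if random.random() < P_MUT else g
            for i, g in enumerate(chrom)]

population = [random_chrom() for _ in range(POP)]
best = min(population, key=fitness)
for _ in range(GENS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP)]
    best = min(population + [best], key=fitness)               # keep best seen so far
print("best options:", best, "fitness:", fitness(best))
```

The studies reviewed below replace the serial-chain duration used here with a CPM calculation over the full activity network and add refinements such as Pareto ranking, niche formation and adaptive weights.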
For a multi-objective optimization problem, Pareto optimality denotes the situation in which no further improvement can be made to one objective without sacrificing at least one of the other objectives. The Pareto front is the set of solutions satisfying the conditions of Pareto optimality, based on which designers can make trade-off decisions. Using a GA and the Pareto front, they proposed an algorithm for solving the construction time-cost trade-off problem; the algorithm demonstrates its efficiency by searching only a small fraction of the total search space BIB002. However, the method is only applicable to finish-to-start relationships between activities, and it is unable to deal with limited resources.
BIB005 suggested a GA-based multicriteria computational optimal scheduling model for the CSO that integrates the time-cost trade-off, resource-limit and resource-leveling models. To overcome the computational inefficiency due to repair, a new crossover operator, UX3, is developed, which takes into account activity precedence relationships when writing characters from substrings into offspring strings. A multiple attribute decision-making method is employed to find the non-dominated solutions.
To reduce the computational effort involved in using a GA, BIB003 proposed an improved GA for facility time-cost scheduling optimization. An improved crossover operator was introduced to ensure that the offspring are feasible solutions, and an improved mutation operation was introduced to adjust the crashing time such that the constraints are met. A limitation of the method proposed by BIB003 is that crash times are treated as continuous variables, which can be impractical; furthermore, resource-constrained situations are not considered.
In addressing the shortcomings of Li and Love (1997), BIB006 developed an approach that integrated a GA with the commercial scheduling software Microsoft Project 4.1 to deal with the construction time-cost trade-off scheduling problem. Using the CPM engine and other functions embedded in the software, such as resource leveling, resource availability is considered during the evolutionary computation process. The developed method includes the project deadline, daily incentive, daily liquidated damages and daily indirect cost in its formulation and uses total cost as the objective function. Due to its random nature, a considerable amount of computation time is required for large network problems.
BIB007 stated that traditional GA-based systems for solving time-cost trade-off problems suffer from two limitations: (1) the objective function is formulated manually based on the time-cost curves; and (2) the systems only deal with linear time-cost relationships. To overcome these limitations, BIB007 developed a computational method integrating machine learning with GA, in which a quadratic template is introduced to capture the non-linearity of time-cost relationships. A quadratic time-cost curve is generated from historical data and used to formulate the objective function, which can then be solved by the GA. Improved crossover and mutation operators were also used to enhance the computation speed. In BIB007, a quadratic time-cost relationship was considered, though this may not be appropriate in more complex projects; further improvement is needed to capture the non-linear relationship between time and cost.
BIB008 used a GA to search for near-optimum solutions to the resource allocation and leveling problems simultaneously.
Random priorities were introduced into selected tasks and their impact on the schedule was monitored. In this instance, the GA is able to search for an optimal set of task priorities that produces a shorter project duration and better-leveled resource profiles. A major advantage of the method is its simplicity, and as a result it can be integrated into commercial project management software.
BIB010 presented an augmented Lagrangian GA model for the construction resource scheduling problem. The proposed model considers several issues such as precedence relationships, multiple crew strategies, total project cost minimization and the time-cost trade-off, and resource leveling and resource-constrained scheduling are performed simultaneously. Taking into consideration continuous linear and non-linear cost-duration and resource-duration curves, an objective function minimizing the total project cost was formulated subject to several constraints. A quadratic penalty function was then used to transform the resource scheduling problem into an unconstrained one to facilitate the application of the GA, which could yield optimal or suboptimal solutions.
BIB009 used a GA to solve a construction scheduling problem considering four groups of constraints: physical, contract, resource and information. The proposed GA approach can alter task priorities and construction methods so as to achieve an optimal or suboptimal solution. Microsoft Project was chosen to implement the GA system, and interfaces were developed using the Visual Basic for Applications language. The study considered multiple objectives (ie, duration, cost, and resource and space utilization), which are optimized simultaneously using a multi-objective weighting method. This is a useful and simple way to deal with multi-objective problems; however, since the weights are predetermined, system performance can be affected by dominant objectives, and choosing appropriate weights poses difficulties for the application of this method.
BIB011 developed a multi-objective model for construction time-cost optimization problems. The feature of the method is to solve the trade-off between time and cost so as to minimize them simultaneously. To deal with the multi-objective problem, a modified adaptive weight approach (MAWA) was proposed that adjusts the scope of the next search according to the performance of the current population in obtaining an optimum BIB011. A new fitness function was also proposed in accordance with the MAWA. The modified adaptive weights can guide the algorithm to search a larger space and increase the diversity of exploration.
To overcome the weakness of the 'roulette wheel' selection method used by the traditional GA, BIB012 employed a Pareto ranking approach for the selection phase of the GA. With Pareto ranking, all non-dominated solutions in the current population are grouped and ranked; the group with the higher rank has a greater chance to survive, which ensures equal reproductive probability among non-dominated solutions on the same level BIB012. A niche formation technique was also introduced to enhance population diversity. In this approach, the resources are assumed to be unlimited on the condition that an extra amount can only be obtained at a higher price, which may be impractical.
El-Rayes and Kandil (2005) proposed a GA-based approach to solve a highway construction scheduling problem.
A new objective, 'quality', was introduced that transformed the traditional time-cost trade-off problem into a time-cost-quality trade-off problem. The objective of the optimization problem is to minimize construction time and cost while maximizing quality. A number of measurable quality indicators for each activity in the project were introduced in order to quantify the construction quality. Pareto optimality and a niche comparison rule were introduced for the GA computation. Long and Ohsato (2009) developed a GA-based method for scheduling repetitive construction projects considering project duration, cost, or both. Resource constraints, activities with different attributes, and different relationships between direct cost and duration were considered. Unlike the previous research, which aimed to balance the cost of delay with the cost of discontinuities, BIB015 extended the repetitive CSO to a non-linear and complex optimization problem by presenting a non-linear combined performance index based on the deviations from the minimum cost and duration. A two-stage sub-procedure (SP1) was employed to evaluate the fitness function for the GA computation. The proposed approach could assist planners in making alternative resource-selection decisions to minimize project duration and cost. However, Long and Ohsato (2009) did not consider that several crews can work simultaneously and therefore only dealt with deterministic information. In practice, the actual time and cost of each activity in a construction option may be uncertain. This would make it very difficult for managers to make a decision. To overcome this difficulty, an optimization approach dealing with the construction time-cost trade-off problem under uncertainty was proposed. Fuzzy theory was introduced, by which project time and cost are represented as fuzzy numbers. Then, a GA was applied to solve the optimization problem. The proposed model has the ability to adapt to both deterministic and uncertain environments by using the α-cut method. New non-dominated solutions can be obtained using the α-cut property according to the project manager's accepted level of risk. Kim and Ellis (2008) used a permutation-based elitist GA to solve large-scale resource-constrained construction scheduling problems. A random number generator was developed that could generate a feasible precedence permutation such that an initial population was created for the possible solutions of the scheduling problem. Then, an elitist strategy was adopted that preserves the best individual from the previous generation into the current generation so as to prevent the loss of the best solutions found. Three different termination conditions, that is, number of generations, timeout and number of unique schedules, were used to obtain the final solution. However, it has been noticed that this method does not work well on large-sized problems. By using the proposed permutation-based elitist GA, BIB018 compared the efficiencies of two schedule generation schemes, that is, the serial scheme and the parallel scheme, for decoding the schedule representation into a schedule for resource-constrained projects. Experimental results demonstrate that the serial scheme is superior to the parallel scheme, which consumes more time for solving each problem.
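The serial schedule generation scheme referred to above is the standard way of decoding a precedence-feasible activity list (or priority vector) into a schedule under resource constraints. The sketch below is a simplified, illustrative Python implementation (the function name and data layout are ours, not those of BIB018); it schedules each activity in list order at the earliest time that satisfies its finish-to-start predecessors and a single renewable resource limit:

```python
def serial_sgs(activity_list, durations, demands, predecessors, capacity):
    """Decode a precedence-feasible activity list into start times.

    activity_list : activities in priority order (each appears after its predecessors)
    durations     : dict activity -> duration (in periods)
    demands       : dict activity -> units of the single renewable resource
    predecessors  : dict activity -> list of predecessor activities
    capacity      : resource units available in each period
    """
    horizon = sum(durations.values())            # crude upper bound on the makespan
    usage = [0] * (horizon + 1)                  # resource profile per period
    start = {}
    for act in activity_list:
        # earliest start allowed by the finish-to-start precedence relations
        t = max((start[p] + durations[p] for p in predecessors[act]), default=0)
        # shift right until the resource profile can accommodate the activity
        while any(usage[u] + demands[act] > capacity
                  for u in range(t, t + durations[act])):
            t += 1
        start[act] = t
        for u in range(t, t + durations[act]):
            usage[u] += demands[act]
    return start

# toy example: four activities competing for 4 units of one resource
durations = {"A": 2, "B": 3, "C": 2, "D": 1}
demands = {"A": 2, "B": 3, "C": 2, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(serial_sgs(["A", "B", "C", "D"], durations, demands, preds, capacity=4))
```

A parallel scheme differs in that it advances a decision point through time and, at each point, starts as many eligible activities as the remaining capacity allows.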
Chen and Weng (2009) proposed a two-phase GA module for the resource-constrained project scheduling problem. To handle construction constraints such as precedence relationships, resource requirements and availability, and the interruption and overlapping of activities, a two-phase approach was developed. In the first stage, a GA-based time-cost trade-off analysis was adopted to select the schedule for each activity. Then, a GA-based resource scheduling method was used to generate a feasible solution satisfying the project constraints. A hybrid GA and simulated annealing module was proposed by BIB016 for solving the multi-project scheduling problem subject to multi-resource constraints. The simulated annealing component contributes to the selection phase of the GA through a new fitness function. As the number of generations increases, the fitness value increases and induces the algorithm to choose better-fitted solutions. The mutation rate also decreases as the number of generations grows BIB016 . The objective of this approach is to minimize the largest finish time of the activities, and it does not consider project cost factors. Owing to its random search mechanism, the GA can solve a variety of optimization problems by searching a large solution space. Selection of the fitness function is crucial for GA computation. An improperly selected fitness function may cause the algorithm to become trapped in a local optimum. It is also difficult for a GA to deal with problems involving dynamic data, for example, uncertain construction activities that may change frequently. The algorithm may converge to a solution that may not work well for later data. The size of the initial population is also important for the operation of the algorithm. A large population will greatly increase the computational effort, while a small population may cause the optimal solution to be missed. Due to its random search mechanism, a GA can usually find a solution that is better than the alternatives it has examined; however, the solution obtained by a GA cannot be guaranteed to be globally optimal. As a result, it is difficult to determine the stopping criterion of the algorithm. Ant colony optimization. Ethologists have revealed that ants can find the shortest path between their nest and food sources. They discovered that, when ants are searching for food, they lay down pheromones to indicate the path to each other. The pheromone dissipates over time; however, it increases when other ants travel on the same path. Following ants tend to choose the path with more pheromone, which leads all ants to converge to the same path. ACO is an efficient method for solving combinatorial optimization problems and was founded on these behaviours of real ants. To implement ACO for solving the construction scheduling problem, we may represent the problem by a weighted network graph. Initial pheromone should be assigned to each edge within the network so as to start the first search. Then, according to the pheromone information, selection probabilities are determined, based on which an artificial ant travels from the first to the last activity so that the entire project is completed. When the ant travels on a path, pheromone is added to the options chosen by the ant to finish the project. Then, the next iteration starts, until the stopping criterion is met. A number of researchers in construction and engineering have adopted ACO to address time-cost trade-off problems (e.g., Ng and Zhang, 2008; BIB017 Lakshminarayanan et al, 2010).
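A minimal sketch of the ant construction loop just described is given below (Python, illustrative only; the probability rule and pheromone update follow generic ACO conventions rather than those of any specific cited study). Each activity is assumed to have several execution options (e.g., crashing levels), and an ant picks one option per activity with probability proportional to the pheromone on that option weighted by a heuristic desirability:

```python
import random

def construct_solution(pheromone, heuristic, alpha=1.0, beta=2.0):
    """One ant walks activity by activity and picks an option for each.
    pheromone[i][k] and heuristic[i][k] are the pheromone level and the
    desirability of option k for activity i."""
    solution = []
    for tau_i, eta_i in zip(pheromone, heuristic):
        weights = [(t ** alpha) * (e ** beta) for t, e in zip(tau_i, eta_i)]
        solution.append(random.choices(range(len(tau_i)), weights=weights)[0])
    return solution

def update_pheromone(pheromone, best_solution, best_quality, rho=0.1):
    """Evaporate all trails, then reinforce the options used by the best ant."""
    for i, options in enumerate(pheromone):
        for k in range(len(options)):
            options[k] *= (1.0 - rho)                     # evaporation
        options[best_solution[i]] += rho * best_quality   # reinforcement

# toy run: three activities, each with two duration/cost options
pheromone = [[1.0, 1.0] for _ in range(3)]
heuristic = [[0.5, 1.0], [1.0, 0.8], [0.9, 0.6]]          # e.g., inverse of option cost
for _ in range(50):
    ants = [construct_solution(pheromone, heuristic) for _ in range(10)]
    best = min(ants, key=sum)                             # placeholder objective
    update_pheromone(pheromone, best, best_quality=1.0)
```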
The ACO algorithm consists of four elements: (1) the construction of a solution, which represents an ant's travel through all the activities so as to finish the project; (2) the selection probability, which determines which node is to be selected based on the pheromone information; (3) the pheromone-updating rule, which memorizes the path when an ant finishes its trip, adding pheromone to the activities chosen by the ant; and (4) a stopping criterion, which terminates the optimization procedure. Ng and Zhang (2008) adopted the modified adaptive fitness function, which can be used to evaluate the project time and cost. Two updating rules, that is, a local updating rule and a global updating rule, are presented for updating the pheromone information. The experiment in BIB014 demonstrated that the ACO approach can generate a better result than the GA by reducing cost for the same duration. When using the ACO approach there is, however, a tendency for premature convergence to occur. Methods for searching the local space adjacent to a solution should be considered so as to obtain a global optimum. Another limitation is that there is no existing criterion for choosing the parameters within the algorithm. Methods that can contribute to selecting these parameters, such as neural networks and machine learning, would assist with the implementation of ACO. Afshar et al (2009) developed a non-dominated archiving multi-colony ant algorithm to solve the construction time-cost trade-off optimization problem. A colony of agents is assigned to each objective. Both colonies have the same number of ants and arbitrary objective orders. The solution found in the first colony is transferred to the second colony for evaluation, and the new solution is transferred back to the first colony for the next iteration cycle. After a number of iterations, the non-dominated solutions are transferred to an external archive where they are compared with each other so as to exclude the dominated solutions. The Pareto front is obtained after a predetermined number of iterations. Experimental research conducted by BIB017 demonstrates that when the number of non-dominated solutions increases, the proposed method can achieve better solutions than the weighting method adopted by the traditional single-colony system. Lakshminarayanan et al (2010) used ACO to solve the extended time-cost-risk trade-off problem of construction scheduling. On the basis of the time-cost trade-off problem, an objective function for the project risk with regard to the utilization of each activity was introduced by using a set of quality indicators. The risks associated with the construction project were classified and grouped into a number of zones based on their severity. The problem was solved by ACO using a test construction project. Similarly, BIB020 proposed a multi-objective optimization approach for the time-cost-quality-quantity trade-off problem of construction scheduling based on ACO. The objective functions were derived by quantifying the duration, total cost and performance quality. Then, a multi-colony ant system was utilized to solve a test problem introduced by BIB002 . In this multi-objective optimization problem, the weighting parameters chosen by the authors for the objectives are 10, 10,000 and 0.0005; however, how to determine these parameters is not specified. Ant colony optimization is a powerful tool for solving combinatorial optimization problems.
However, several problems should be considered and studied extensively for better application of this method, for example, the premature convergence phenomenon, the stopping criterion and the method for determining the parameters. Particle swarm optimization. PSO is a computational method that solves an optimization problem by iteratively improving the performance of candidate solutions according to a given objective measurement. Using a population of candidate particles, PSO searches for the optimal solution by moving its particles around a D-dimensional search space. The position of each particle in the D-dimensional space can be expressed by X_i^t = {x_i1^t, x_i2^t, ..., x_iD^t}, i = 1, 2, ..., M, where M is the population size; t = 1, 2, ..., T represents the generation and T is the iteration limit. Similarly, the particle speed can be expressed by V_i^t = {v_i1^t, v_i2^t, ..., v_iD^t}, i = 1, 2, ..., M and t = 1, 2, ..., T. The position of each particle is a potential solution to the problem that is evaluated according to a given objective function. The speed and position of the particles are updated using the following formulas BIB004 :

v_id^{t+1} = w · v_id^t + c_1 · rand() · (p_id − x_id^t) + c_2 · rand() · (p_gd^t − x_id^t), (23)

x_id^{t+1} = x_id^t + v_id^{t+1}, (24)

where w is an inertia weighting parameter, rand() represents a random number between 0 and 1, c_1 and c_2 are positive learning factors, p_id is the local best solution obtained by the ith particle after t−1 iterations and p_gd^t represents the global best solution achieved so far. From (23) and (24), we can see that the particle speed and position are updated iteratively based on knowledge of the local and global best solutions obtained. As a metaheuristic method, PSO needs few or no assumptions about the problem to be solved and can search a large space of candidate solutions, which makes it efficient for solving combinatorial optimization problems. The application of PSO to the CSO problem began with a PSO-based approach proposed to solve the resource-constrained project scheduling problem with the objective of minimizing the project duration. To develop a feasible schedule, a particle representation of activity priorities was adopted that is able to avoid infeasible sequences arising from the current particle positions. Then, a parallel scheme was used to decode the particles into a feasible schedule according to the precedence and resource constraints. These two steps form the framework of PSO for solving the resource-constrained project scheduling problem. In this study, only one objective, project duration, was considered, while another important factor, cost, was not taken into account. BIB013 extended the application of PSO to a multi-mode resource-constrained project scheduling problem considering both renewable and non-renewable resources. A pair of particle positions was adopted to represent one potential solution by indicating the priority combination and the mode combination. PSO has also been applied to a preemptive CSP under break and resource constraints with the objective of minimizing the project duration. In this study, preemptive activities that can be interrupted during off-working time (e.g., at night) were considered. All the resources shared by multiple activities were reallocated during the break. The interrupted activities do not restart immediately after the break because of the resource reallocation. The scheduling priority was represented by the multidimensional position of the particle.
The problem was then solved by PSO, and a parallel scheme was adopted to transform the priorities into a feasible schedule. In this study, only single-mode resources were considered; incorporating multi-mode resource constraints among different activities would be an interesting future research topic. BIB019 proposed a modified PSO for solving the CSP for underground mining at the coalface. The modified PSO was developed from the traditional PSO by introducing a new crossover operator that operates on coupled particles selected from half of the particle population. Newly created children with better fitness than their parents are chosen for the next iteration. The optimization process consists of two stages. In the first stage, PERT is utilized to derive the time parameters and the network graph from the raw data. In the second stage, the modified PSO is used to optimize the net present value based on the network graph. A GA and the traditional PSO were also utilized to solve the same problem for comparison, and experimental results demonstrated that the modified PSO is superior to both. PSO is an efficient algorithm for solving combinatorial optimization problems. However, the solution obtained by PSO is not necessarily a global or even a local optimum. The selection of the PSO parameters determines whether the particles converge, diverge or oscillate. To date, PSO parameters have mainly been selected based on empirical results. When population diversity declines, the particle speed decreases, which in turn reduces the capability of the algorithm to search for feasible solutions.
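A minimal sketch of the velocity and position updates in (23) and (24) is given below (Python, illustrative only; the parameter values and the toy objective are our own choices). In a scheduling application, each dimension of a particle would typically encode an activity priority or a crashing decision that is subsequently decoded into a schedule, for example by the serial or parallel schemes discussed earlier:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Basic PSO minimizing `objective` over [0, 1]^dim; the velocity and
    position updates follow Eqs. (23) and (24)."""
    x = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]                        # personal best positions
    p_val = [objective(xi) for xi in x]
    g_best = p_best[min(range(n_particles), key=lambda i: p_val[i])][:]
    g_val = min(p_val)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (p_best[i][d] - x[i][d])
                           + c2 * random.random() * (g_best[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = objective(x[i])
            if val < p_val[i]:                          # update personal best
                p_val[i], p_best[i] = val, x[i][:]
                if val < g_val:                         # update global best
                    g_val, g_best = val, x[i][:]
    return g_best, g_val

# toy objective standing in for a decoded schedule cost
print(pso(lambda xs: sum((xi - 0.3) ** 2 for xi in xs), dim=5))
```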
|
A Review of Methods and Algorithms for Optimizing Construction Scheduling <s> Conclusion and future research <s> In this paper, a computational approach based on a new exact penalty function method is devised for solving a class of continuous inequality constrained optimization problems. The continuous inequality constraints are first approximated by smooth function in integral form. Then, we construct a new exact penalty function, where the summation of all these approximate smooth functions in integral form, called the constraint violation, is appended to the objective function. In this way, we obtain a sequence of approximate unconstrained optimization problems. It is shown that if the value of the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method. From the solutions obtained, we observe that the values of their objective functions are amongst the smallest when compared with those obtained by other existing methods available in the literature. More importantly, our method finds solution which satisfies the continuous inequality constraints. <s> BIB001
|
The CSO has been examined using an array of methods and algorithms. The original single-objective optimization problems have been extended to multi-objective trade-off optimization problems subject to various construction constraints. The methodologies that have been applied to solve the CSO problem can be classified into three categories: mathematical methods, heuristic methods and metaheuristic methods. To implement a mathematical method, the problem needs to be explicitly formulated (i.e., the objective function and constraints). This is a time-consuming task that is difficult for construction planners who do not have the requisite mathematical knowledge and background. Some mathematical search algorithms, for example the hill-climbing algorithm, are single-objective oriented and likely to become trapped in local optima. Therefore, methodologies by which global optimality can be obtained are in high demand. Constraints are critical factors in solving the CSO problem. Traditional mathematical methods usually treat the constraints and the objective function separately, that is, optimizing the objective function subject to the constraints. To solve such a problem, we need a feasible point to initiate the search process and algorithms that can guarantee that the constraints are satisfied. It is suggested that future research in construction scheduling should consider the application of the exact penalty function method for constrained optimization problems BIB001 . The method integrates the constraints into the objective function by using several penalty parameters such that the original constrained optimization problem is transformed into an unconstrained optimization problem. It is shown that if the value of the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem BIB001 . With such a transformation, many existing methods can be utilized to deal with the unconstrained optimization problem, which makes the problem much easier to solve. The advantage of heuristic methods is their simplicity. Well-known heuristic methods include Fondahl's method, the structural model method, the Siemens approximation method and the structural stiffness method. Due to its simplicity and efficiency, Fondahl's method has been adopted by many commercial project scheduling software packages. However, most heuristic methods are problem dependent, which makes it difficult to apply them equally well to other projects. It has also been noticed that most of the current heuristic methods focus on single-project scheduling. Only a few of them can deal with the multi-project scheduling problem, for example , which schedules multiple projects subject to cash constraints. It is suggested that approaches focusing on multi-project scheduling problems subject to multiple objectives and multiple constraints could be a promising future research direction. Metaheuristic methods solve optimization problems by mimicking certain natural processes. The most commonly adopted metaheuristic method for addressing CSO problems is the GA. By introducing the concept of Pareto optimality, a GA can provide a pool of candidate solutions for the decision maker. Research has focused on how to improve the performance of the GA, for example, to prevent premature convergence and increase population diversity.
In doing so, some enhanced GAs have begun to emerge that modify the objective weighting parameters or improve the selection, crossover or even mutation operators. However, an efficient and practical method that can choose these parameters adaptively has not yet been found. As a result, it is suggested that artificial intelligence methods such as machine learning and neural networks, which can evolve their behaviour from example data, could be used to tune such algorithms automatically. Within the reviewed normative literature, time and cost are the commonly considered objectives. Research has focused on minimizing project time and cost so as to achieve maximum profit. However, the minimization of time and cost has an influence on project quality and risk, which are even more crucial for the successful completion of a construction project. Unfortunately, these key factors have been neglected in most of the studies undertaken to date. It is therefore suggested that a multi-objective construction scheduling problem considering both the minimization of time, cost and risk and the maximization of quality, subject to multiple constraints, would be a promising future research topic.
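To make the penalty-function transformation recommended in this conclusion concrete, the following minimal sketch (Python, illustrative only) uses a generic quadratic penalty, which is simpler than the exact penalty function of BIB001 but illustrates the same idea of appending a constraint-violation term to the objective so that an unconstrained optimizer can be applied. All names and parameter values are ours:

```python
def penalized_objective(f, constraints, rho):
    """Fold inequality constraints g_k(x) <= 0 into the objective via a
    quadratic penalty on the total constraint violation."""
    def phi(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    return phi

# toy example: choose an activity duration d to minimize its direct cost,
# subject to a deadline of 8 periods (crashing below the deadline costs more)
f = lambda x: 500.0 / x[0]            # direct cost decreases as duration grows
deadline = lambda x: x[0] - 8.0       # constraint: duration - 8 <= 0
phi = penalized_objective(f, [deadline], rho=1e3)

# any unconstrained method can now be used; here, a crude grid search
durations = [d / 10.0 for d in range(10, 200)]
best = min(durations, key=lambda d: phi([d]))
print(best)  # the deadline becomes binding once rho is large enough
```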
|
Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However it may also be beneficial when groups perform more efficiently with respect to the single agents' performance. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms, they are simple, efficient and easy to implement. <s> BIB001 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> 1. Introduction to a Multiagent Perspective V. Lesser, C.L. Ortiz, Jr., M. Tambe. Part I: The Sensor Network Challenge Problem. 2. The Radsim Simulator J.H. Lawton. 3. Challenge Problem Testbed P. Zemany, M. Gaughan. 4. Visualization and Debugging Tools A. Egyed, B. Horling, R. Becker, R. Balzer. 5. Target Tracking with Bayesian Estimation J.E. Vargas, K. Tvalarparti, Zhaojun Wu. Part II: Distributed Resource Allocation: Architectures and Protocols. 6. Dynamic resource-bounded negotiation in non-additive domains C.L. Ortiz, Jr., T.W. Rauenbusch, E. Hsu, R. Vincent. 7. A satisficing, negotiated, and learning coalition formation architecture Leen-Kiat Soh, C. Tsatsoulis, H. Sevay. 8. Using Autonomy, Organizational Design and Negotiation in a DSN B. Horling, R. Mailler, Jiaying Shen, R. Vincent, V. Lesser. 9. Scaling-up Distributed Sensor Networks O. Yadgar, S. Kraus, C.L. Ortiz, Jr. 10. Distributed Resource Allocation P.J. Modi, P. Scerri, Wei-Min Shen, M. Tambe. 11. Distributed Coordination through Anarchic Optimization S. Fitzpatrick, L. Meertens. Part III: Insights into Distributed Resource Allocation Protocols based on Formal Analyses. 12. Communication and Computation in Distributed CSP Algorithms C. Fernandez, R. Bejar, B. Krishnamachari, C. Gomes, B. Selman. 13. A Comparative Study of Distributed Constraint Algorithms Weixiong Zhang, Guandong Wang, Zhao Xing, L. Wittenburg. 14. Analysis of Negotiation Protocols by Distributed Search Guandong Wang, Weixiong Zhang, R. Mailler, V. Lesser. 
<s> BIB002 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system. <s> BIB003 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> We describe the Centibots system, a very large scale distributed robotic system, consisting of more than 100 robots, that has been successfully deployed in large, unknown indoor environments, over extended periods of time (i.e., durations corresponding to several power cycles). Unlike most multiagent systems, the set of tasks about which teams must collaborate is not given a priori. We first describe a task inference algorithm that identifies potential team commitments that collectively balance constraints such as reachability, sensor coverage, and communication access. We then describe a dispatch algorithm for task distribution and management that assigns resources depending on either task density or replacement requirements stemming from failures or power shortages. The targeted deployment environments are expected to lack a supporting communication infrastructure; robots manage their own network and reason about the concomitant localization constraints necessary to maintain team communication. Finally, we present quantitative results in terms of a "search and rescue problem" and discuss the team-oriented aspects of the system in the context of prevailing theories of multiagent collaboration. <s> BIB004 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> Many multi-agent systems consist of a complex network of autonomous yet interdependent agents. Examples of such networked multi-agent systems include supply chains and sensor networks. In these systems, agents have a select set of other agents with whom they interact based on environmental knowledge, cognitive capabilities, resource limitations, and communications constraints. 
Previous findings have demonstrated that the structure of the artificial social network governing the agent interactions is strongly correlated with organizational performance. As multi-agent systems are typically embedded in dynamic environments, we wish to develop distributed, on-line network adaptation mechanisms for discovering effective network structures. Therefore, within the context of dynamic team formation, we propose several strategies for agent-organized networks (AONs) and evaluate their effectiveness for increasing organizational performance. <s> BIB005 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> The spread of the Internet and the evolution of mobile communication, have created new possibilities for software applications such as ubiquitous computing, dynamic supply chains and medical home care. Such systems need to operate in dynamic, heterogeneous environments and face the challenge of handling frequently changing requirements; therefore they must be flexible, robust and capable of adapting to the circumstances. It is widely believed that multi-agent systems coordinated by selforganisation and emergence mechanisms are an effective way to design these systems. This paper aims to define the concepts of self-organisation and emergence and to provide a state of the art survey about the different classes of self-organisation mechanisms applied in the multi-agent systems domain. Furthermore, the strengths and limits of these approaches are examined and research issues are provided. Povzetek: Clanek opisuje pregled samoorganizacije v MAS. <s> BIB006 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> Individual robots or agents will often need to form coalitions to accomplish shared tasks, e.g., in sensor networks or markets. Furthermore, in most real systems it is infeasible for entities to interact with all peers. The presence of a social network can alleviate this problem by providing a neighborhood system within which entities interact with a reduced number of peers. Previous research has shown that the topology of the underlying social network has a dramatic effect on the quality of coalitions formed and consequently on system performance (Gaston & deslardins 2005a). It has also been shown that it is feasible to develop agents which dynamically alter connections to improve an organization's ability to form coalitions on the network. However those studies have not analysed the network topologies that result from connectivity adaptation strategies. In this paper the resulting network topologies were analysed and it was found that high performance and rapid convergence were attained because scale free networks were being formed. However it was observed that organizational performance is not impacted by limiting the number of links per agent to the total number of skills available within the population. implying that bandwidth was wasted by previous approaches. We used these observations to inform the design of a token based algorithm that attains higher performance using an order of magnitude less messages for both uniform and non-uniform distributions of skills. 
<s> BIB007 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> Previous studies of team formation in multi-agent systems have typically assumed that the agent social network underlying the agent organization is either not explicitly described or the social network is assumed to take on some regular structure such as a fully connected network or a hierarchy. However, recent studies have shown that real-world networks have a rich and purposeful structure, with common properties being observed in many different types of networks. As multi-agent systems continue to grow in size and complexity, the network structure of such systems will become increasing important for designing efficient, effective agent communities. ::: ::: We present a simple agent-based computational model of team formation, and analyze the theoretical performance of team formation in two simple classes of networks (ring and star topologies). We then give empirical results for team formation in more complex networks under a variety of conditions. From these experiments, we conclude that a key factor in effective team formation is the underlying agent interaction topology that determines the direct interconnections among agents. Specifically, we identify the property of diversity support as a key factor in the effectiveness of network structures for team formation. Scale-free networks, which were developed as a way to model real-world networks, exhibit short average path lengths and hub-like structures. We show that these properties, in turn, result in higher diversity support; as a result, scale-free networks yield higher organizational efficiency than the other classes of networks we have studied. <s> BIB008 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> In multi agent system, how to find a coalition structure making the greatest profits in cooperation in the shortest time is an issue which has been given much attention. When finding the optimal coalition structure, if we make no any restriction on the searching space, we must search all the coalition structures. An anytime algorithm—LVAA(Lateral and Vertical Anytime Algorithm), designed in this paper, used a branch and bound technique and pruning function to simplify the searching space besides L 1 , L 2 and L n layers vertically and horizontally. Then, it find the optimal coalition structure value. The result of the experiment proved that it greatly reduced the searching space. When coalition values meet uniform distribution and normal distribution respectively, the searching times of LVAA can be reduced by 78% and 82% than Sandholm's for agent number 18 and 23. <s> BIB009 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> A major research challenge in multi-agent systems is the problem of partitioning a set of agents into mutually disjoint coalitions, such that the overall performance of the system is optimized. This problem is difficult because the search space is very large: the number of possible coalition structures increases exponentially with the number of agents. 
Although several algorithms have been proposed to tackle this Coalition Structure Generation (CSG) problem, all of them suffer from being inherently centralized, which leads to the existence of a performance bottleneck and a single point of failure. In this paper, we develop the first decentralized algorithm for solving the CSG problem optimally. In our algorithm, the necessary calculations are distributed among the agents, instead of being carried out centrally by a single agent (as is the case in all the available algorithms in the literature). In this way, the search can be carried out in a much faster and more robust way, and the agents can share the burden of the calculations. The algorithm combines, and improves upon, techniques from two existing algorithms in the literature, namely DCVC [5] and IP [9], and applies novel techniques for filtering the input and reducing the inter-agent communication load. <s> BIB010 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> The coordination of emergency responders and robots to undertake a number of tasks in disaster scenarios is a grand challenge for multi-agent systems. Central to this endeavour is the problem of forming the best teams (coalitions) of responders to perform the various tasks in the area where the disaster has struck. Moreover, these teams may have to form, disband, and reform in different areas of the disaster region. This is because in most cases there will be more tasks than agents. Hence, agents need to schedule themselves to attempt each task in turn. Second, the tasks themselves can be very complex: requiring the agents to work on them for different lengths of time and having deadlines by when they need to be completed. The problem is complicated still further when different coalitions perform tasks with different levels of efficiency. Given all these facets, we define this as The Coalition Formation with Spatial and Temporal constraints problem (CFSTP). We show that this problem is NP-hard---in particular, it contains the well-known complex combinatorial problem of Team Orienteering as a special case. Based on this, we design a Mixed Integer Program to optimally solve small-scale instances of the CFSTP and develop new anytime heuristics that can, on average, complete 97% of the tasks for large problems (20 agents and 300 tasks). In so doing, our solutions represent the first results for CFSTP. <s> BIB011 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> INTRODUCTION <s> Agent-based cloud computing is concerned with the design and development of software agents for bolstering cloud service discovery, service negotiation, and service composition. The significance of this work is introducing an agent-based paradigm for constructing software tools and testbeds for cloud resource management. The novel contributions of this work include: 1) developing Cloudle: an agent-based search engine for cloud service discovery, 2) showing that agent-based negotiation mechanisms can be effectively adopted for bolstering cloud service negotiation and cloud commerce, and 3) showing that agent-based cooperative problem-solving techniques can be effectively adopted for automating cloud service composition. 
Cloudle consists of 1) a service discovery agent that consults a cloud ontology for determining the similarities between providers' service specifications and consumers' service requirements, and 2) multiple cloud crawlers for building its database of services. Cloudle supports three types of reasoning: similarity reasoning, compatibility reasoning, and numerical reasoning. To support cloud commerce, this work devised a complex cloud negotiation mechanism that supports parallel negotiation activities in interrelated markets: a cloud service market between consumer agents and broker agents, and multiple cloud resource markets between broker agents and provider agents. Empirical results show that using the complex cloud negotiation mechanism, agents achieved high utilities and high success rates in negotiating for cloud resources. To automate cloud service composition, agents in this work adopt a focused selection contract net protocol (FSCNP) for dynamically selecting cloud services and use service capability tables (SCTs) to record the list of cloud agents and their services. Empirical results show that using FSCNP and SCTs, agents can successfully compose cloud services by autonomously selecting services. <s> BIB012
|
As agent technology becomes more capable and more reliable, multiagent systems have been widely utilized to model real-world applications, such as distributed robotic systems BIB004 , distributed sensor networks BIB002 , supply chain management , cloud computing BIB012 , and grid computing . In many applications of multiagent systems, groups of agents need to dynamically join together in a coalition to complete a complex task, which none of them can complete independently. For example, in a distributed vehicle-tracking sensor network, to track a vehicle efficiently, at least three sensor agents are required to triangulate the position of a vehicle moving through the region BIB003 . Recently, much effort has been devoted to coalition formation in multiagent systems, e.g., BIB001 , BIB009 , BIB010 , BIB011 . A common assumption in these studies is that the agent network structure is either not explicitly modeled or is based on a fully connected network, namely that an agent can directly communicate with all the agents in the network. BIB004 However, in many real circumstances, particularly in large and distributed environments, it is infeasible for each individual agent to consider all the other agents when forming coalitions, owing to time, communication and computation constraints; an example is a wireless sensor network in which each node has a limited communication range. To overcome this limitation, several researchers, e.g., BIB005 , BIB007 , imposed a neighborhood network structure among agents and required that agents directly communicate only with their neighbors. It should be noted that the introduction of such a network structure does not in itself imply advantages or disadvantages. Instead, it simply reflects the reality and constraints of the real world. For example, in some real systems, e.g., wireless sensor networks, owing to the communication constraint, sensor nodes can directly communicate only with their neighbors, i.e., one-hop nodes. Thus, in such sensor networks, if a node wants to form a coalition with other nodes to execute a task, e.g., several nodes collaboratively monitoring a moving target, the node cannot find coalition members directly across the whole sensor network but only from its neighbors (or neighbors' neighbors if necessary). Obviously, several new challenges arise when coalition formation mechanisms are designed in a structured agent network, such as agents having to find potential coalition members through direct communication with their neighbors only, and the uncertainty of distant activities. Gaston and desJardins BIB005 , BIB008 , and Glinton et al. BIB007 have made great efforts in this direction. They investigated the impact of diverse network structures on coalition formation among agents and found that an underlying network structure does have a crucial effect on the performance of agent coalition formation. The studies in BIB005 , BIB008 , and BIB007 initiated a new research field in the study of coalition formation, i.e., designing coalition formation mechanisms within explicitly modeled network structures. However, since their research focused on the effect of network structures on coalition formation, the coalition formation mechanisms developed in their research were relatively simple. In their mechanisms BIB005 , BIB008 , BIB007 , an agent can join only one coalition and, once a coalition is formed for a task, the coalition is fixed and agents cannot leave the coalition before the task is finished.
Against this background, we design in this paper a coalition formation mechanism in a structured agent network, which is the main contribution of this paper. The proposed mechanism assumes that there is a network with explicit links between the agents, such that only agents that are linked to each other (directly or indirectly) can form coalitions. Additionally, our coalition formation mechanism incorporates the self-adaptation concept, which enables agents to dynamically adjust their degrees of involvement in coalitions and to join new coalitions, via negotiation, at any time if necessary. The process of self-adaptation in a large-scale and distributed system is of key importance to the performance of the system as a whole, and it can be employed in agent networks to improve the cooperative behaviors of agents BIB006 . Compared with most related studies, which do not take a network structure into account, we consider the existence of an underlying network structure. Compared with those related studies that do consider network structures, our mechanism, by integrating the self-adaptation notion, gives agents autonomy and flexibility when executing tasks. Our mechanism is elucidated by using distributed task allocation. By employing a general application area, i.e., distributed task allocation, instead of a particular existing system, we can develop a general mechanism that can potentially be applied to a wide variety of applications. The remainder of the paper is organized as follows: Section 2 introduces the agent network model. Section 3 proposes the dynamic coalition formation mechanism. Experimental results and analysis are presented in Section 4 and the paper is concluded in Section 5. A brief survey of current coalition formation studies, and a discussion of the differences between our work and these studies, are given in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2012.213.
|
Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> The Negotiation Protocol <s> We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated. <s> BIB001 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> The Negotiation Protocol <s> Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However it may also be beneficial when groups perform more efficiently with respect to the single agents' performance. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and with low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximation algorithms for NP-hard problems. We first present an approach to agent coalition formation where each agent must be a member of only one coalition. Next, we present the domain of overlapping coalitions. We proceed with a discussion of the domain where tasks may have a precedence order. Finally, we discuss the case of implementation in an open, dynamic agent system. For each case we provide an algorithm that will lead agents to the formation of coalitions, where each coalition is assigned a task. Our algorithms are any-time algorithms, they are simple, efficient and easy to implement. <s> BIB002 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> The Negotiation Protocol <s> We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. 
We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system. <s> BIB003 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> The Negotiation Protocol <s> We consider the problem of allocating networked resources in dynamic environment, such as cloud computing platforms, where providers strategically price resources to maximize their utility. Resource allocation in these environments, where both providers and consumers are selfish agents, presents numerous challenges since the number of consumers and their resource demand is highly dynamic. While numerous auction-based approaches have been proposed in the literature, this paper explores an alternative approach where providers and consumers automatically negotiate resource leasing contracts. Since resource demand and supply can be dynamic and uncertain, we propose a distributed negotiation mechanism where agents negotiate over both a contract price and a decommitment penalty, which allows agents to decommit from contracts at a cost. We compare our approach experimentally, using representative scenarios and workloads, to both combinatorial auctions and the fixed-price model used by Amazon's Elastic Compute Cloud, and show that the negotiation model achieves a higher social welfare. <s> BIB004 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> The Negotiation Protocol <s> Agent-based cloud computing is concerned with the design and development of software agents for bolstering cloud service discovery, service negotiation, and service composition. The significance of this work is introducing an agent-based paradigm for constructing software tools and testbeds for cloud resource management. The novel contributions of this work include: 1) developing Cloudle: an agent-based search engine for cloud service discovery, 2) showing that agent-based negotiation mechanisms can be effectively adopted for bolstering cloud service negotiation and cloud commerce, and 3) showing that agent-based cooperative problem-solving techniques can be effectively adopted for automating cloud service composition. Cloudle consists of 1) a service discovery agent that consults a cloud ontology for determining the similarities between providers' service specifications and consumers' service requirements, and 2) multiple cloud crawlers for building its database of services. Cloudle supports three types of reasoning: similarity reasoning, compatibility reasoning, and numerical reasoning. To support cloud commerce, this work devised a complex cloud negotiation mechanism that supports parallel negotiation activities in interrelated markets: a cloud service market between consumer agents and broker agents, and multiple cloud resource markets between broker agents and provider agents. 
Empirical results show that using the complex cloud negotiation mechanism, agents achieved high utilities and high success rates in negotiating for cloud resources. To automate cloud service composition, agents in this work adopt a focused selection contract net protocol (FSCNP) for dynamically selecting cloud services and use service capability tables (SCTs) to record the list of cloud agents and their services. Empirical results show that using FSCNP and SCTs, agents can successfully compose cloud services by autonomously selecting services. <s> BIB005
|
To operate the coalition formation mechanism, we need another important component, i.e., a negotiation protocol. The coalition formation problem can be modeled as a negotiation process between an Initiator and a Participant, where an Initiator acts as a buyer and a Participant acts as a seller. The negotiation focuses on a single issue, i.e., the DoI of a Participant in a coalition being formed by an Initiator. Two constraints are listed as follows, with which each agent should comply: 1. The DoI of an Initiator in its initiated coalition is postulated to be 1 and cannot be adapted. 2. The summation of a Participant's DoI values across all the coalitions it joins must be equal to or less than 1. Our negotiation protocol extends the protocol in BIB005 by allowing an agent to make multiple agreements with other agents and to cancel temporary agreements without paying a penalty. The alternating offers protocol has been widely used for bilateral bargaining (e.g., An et al. BIB004 ). Other, more complex negotiation protocols may also be available for our problem, e.g., , , but based on our investigation, the alternating offers protocol is powerful enough for our problem and it is easy to implement. It should be noted that the main contribution of this paper is the idea of integrating the self-adaptation notion into dynamic coalition formation rather than this negotiation protocol, which is used only for realizing the self-adaptation notion. There are four possible actions that a buyer (Initiator) and a seller (Participant) can take, as follows. (i) offer[o], where o is the buyer's offer to a seller. An offer is determined by four factors: the pressure of the deadline, the payment for the resource paid by the buyer to the seller, the duration of use of the resource, and the demand/supply ratio of the buyer's required resource. (ii) accept[o]. When a seller receives an offer o, it can accept the offer, which results in a temporary agreement made with the buyer. (iii) counter-offer[o′]. If a seller is not happy with an offer o, it can send back a counter-offer o′ for its available resource. A counter-offer o′ is determined by three aspects: the current state of the seller, e.g., whether it has joined other coalitions and its degrees of involvement in those coalitions; the payment received by the seller from the buyer; and the demand/supply ratio of the seller's available resource. (iv) cancel[o]. After a temporary agreement is reached by a buyer and a seller, either of them can cancel the temporary agreement without paying a penalty. A final agreement, however, cannot be canceled by either a buyer or a seller. The negotiation protocol, invoked in Line 8 of Algorithm 2, is shown in Algorithm 3, Negotiate(a_i, a_j), where a_i is the buyer and a_j is the seller; the protocol repeats while t (the real time) is less than a predefined period. The penalty component of an offer is given by

pe = γ · pay. (4)

In (1), pay is the intended payment made by a_i to a_j, p(r_aj) is the maximum payment that a_i can endure for the required resource r_aj, A_T(a_i) is the set of temporary agreements achieved by a_i for the resource r_aj, and |A_T(a_i)| denotes the number of temporary agreements in the set A_T(a_i). An Initiator can get benefit from the task if and only if all the subtasks of the task are completed, while a Participant obtains the relevant payment when it finishes the assigned subtask. Here, a subtask corresponds to a resource requirement of a task. In (2), DoI_u is the upper-bound DoI in the coalition which a_i wants a_j to join for the task.
It can be seen that a_i's expected DoI_u value for a_j in a_i's coalition decreases as the payment from a_i to a_j increases, which may seem counter-intuitive. This is caused by a_i's concession strategy as a_i's deadline approaches. Such time-dependent concession strategies have been broadly used in the literature (e.g., Faratin et al. BIB001 ). In (3), DoI_l is the lower-bound DoI of a_j in a_i's coalition, which means that a_j cannot reduce its DoI value in a_i's coalition below DoI_l. The coefficient in (3) is a positive integer. Thus, if a_j accepts an offer from a_i, its original DoI in a_i's coalition is DoI_u. a_j is able to decrease its DoI value in a_i's coalition later, but the DoI value must not become less than DoI_l. In (4), γ, where 0 < γ < 1, is a coefficient and pe is the total penalty if a_j wants to reduce its DoI in a_i's coalition from the upper bound to the lower bound, i.e., from DoI_u to DoI_l. The exact penalty a_j should pay to a_i, denoted pe_{j→i}, is based on the extent to which a_j wants to lessen its DoI in a_i's coalition. Specifically, pe_{j→i} can be calculated by using BIB003 , where DoI′ is the current DoI and DoI″ is the new DoI value after the reduction. In (5), pd indicates the period during which the required resource is needed. After receiving an offer o from a_i, in Line 3, a_j evaluates whether the offer o is acceptable. This evaluation is based on how much revenue a_j could get BIB002 . In (7), the cost of a_j depends on its DoI in a_i's coalition and on how long its resource will be used by a_i. The notation pe_{j→k} means the penalty that a_j has to pay other Initiators if a_j wants to lower its DoI values in their coalitions. If rv is greater than a predefined threshold, a_j will accept the offer o and a temporary agreement is achieved (Lines 4 and 5). Otherwise, a_j generates a counter-offer o′ to a_i (Line 9). The elements that constitute a counter-offer o′ are the same as those in an offer o. Since the negotiation issue is only the DoI, as described earlier, a_j will adjust only its DoI value in o to meet its predefined threshold revenue via (7) and will create a counter-offer o′ with the newly calculated DoI value. a_i will then evaluate the counter-offer o′ by comparing the DoI value in o′ with its reserved DoI value. If the DoI value in o′ is greater than its reserved DoI value, o′ is acceptable and a temporary agreement is achieved (Lines 11 and 12). Otherwise, a_i proceeds to the next negotiation round (Line 16) until the predefined negotiation period is reached. When all the resource requirements are satisfied, the Initiator (i.e., the buyer) a_i selects the most valuable (least-payment) temporary agreement and cancels the other temporary agreements. For the Participant (i.e., the seller), a_j can execute several tasks simultaneously as long as the summation of its DoI values is equal to or less than 1.
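The alternating exchange described above can be summarized in a short sketch (Python, illustrative only; the revenue rule, the concession step and all names below are simplified stand-ins for Eqs. (1)-(7) rather than the actual formulas of the mechanism):

```python
def seller_revenue(offer, cost_rate, pending_penalty):
    """Simplified stand-in for the seller's revenue evaluation: the payment
    minus the cost of committing its resource (proportional to DoI and
    duration) and minus any penalty owed for lowering DoI elsewhere."""
    return offer["pay"] - cost_rate * offer["doi"] * offer["pd"] - pending_penalty

def negotiate(offer, seller_threshold, buyer_reserved_doi,
              cost_rate=1.0, pending_penalty=0.0, max_rounds=10):
    """One buyer-seller thread: the seller accepts or counters on the DoI,
    and the buyer either accepts the counter-offer or concedes on payment."""
    for _ in range(max_rounds):
        if seller_revenue(offer, cost_rate, pending_penalty) >= seller_threshold:
            return ("temporary agreement", offer)            # seller accepts
        # seller counters with the DoI that just meets its revenue threshold
        doi = (offer["pay"] - pending_penalty - seller_threshold) / (cost_rate * offer["pd"])
        counter = dict(offer, doi=max(doi, 0.0))
        if counter["doi"] >= buyer_reserved_doi:             # buyer accepts the counter
            return ("temporary agreement", counter)
        offer = dict(offer, pay=offer["pay"] * 1.1)          # buyer concedes on payment
    return ("no agreement", None)

offer = {"pay": 10.0, "doi": 0.8, "pd": 5.0}                  # payment, DoI, duration
print(negotiate(offer, seller_threshold=7.0, buyer_reserved_doi=0.7))
```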
|
Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> Experimental Setup <s> We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources.Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization.Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is mainly the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system. <s> BIB001 </s> Self-Adaptation-Based Dynamic Coalition Formation in a Distributed Agent Network: A Mechanism and a Brief Survey <s> Experimental Setup <s> Individual robots or agents will often need to form coalitions to accomplish shared tasks, e.g., in sensor networks or markets. Furthermore, in most real systems it is infeasible for entities to interact with all peers. The presence of a social network can alleviate this problem by providing a neighborhood system within which entities interact with a reduced number of peers. Previous research has shown that the topology of the underlying social network has a dramatic effect on the quality of coalitions formed and consequently on system performance (Gaston & deslardins 2005a). It has also been shown that it is feasible to develop agents which dynamically alter connections to improve an organization's ability to form coalitions on the network. However those studies have not analysed the network topologies that result from connectivity adaptation strategies. In this paper the resulting network topologies were analysed and it was found that high performance and rapid convergence were attained because scale free networks were being formed. However it was observed that organizational performance is not impacted by limiting the number of links per agent to the total number of skills available within the population. implying that bandwidth was wasted by previous approaches. We used these observations to inform the design of a token based algorithm that attains higher performance using an order of magnitude less messages for both uniform and non-uniform distributions of skills. <s> BIB002
|
To objectively exhibit the effectiveness of our coalition formation mechanism, named self-adaptation coalition formation (SACF), we compare it with three other mechanisms: the centralized mechanism (CM), the classic coalition formation (CCF) mechanism BIB002, and the flexible coalition formation (FCF) mechanism.
1. CM. This is an ideal centralized coalition formation mechanism, in which an external omniscient central manager maintains information about all the agents and tasks. The central manager is able to interact with all the agents in the network without cost. When an agent has a complex task to be completed, it simply requests the central manager to seek the most suitable agents in the network that could fulfill the task and forms coalitions with those agents. This method is neither practical nor robust, but it can be used as an upper bound on performance in our experiments.
2. CCF mechanism. This mechanism was proposed by Glinton et al. BIB002; it enables agents neither to dynamically adjust their degrees of involvement in a coalition nor to autonomously join multiple coalitions. These two behaviors, i.e., adjusting degrees of involvement and joining multiple coalitions, are self-organizing behaviors. Thus, it can be conceived that CCF does not integrate self-adaptation BIB001. Through the comparison with CCF, the significance of integrating self-adaptation into coalition formation can be revealed.
3. FCF mechanism. This mechanism, created by us, is a simplified version of SACF. It allows agents to breach a contract and leave a coalition by paying a penalty to the coalition leader, i.e., the Initiator. Agents, however, cannot partially breach a contract. Therefore, agents can join only one coalition at any time step. The comparison with this mechanism exposes the importance of the DoI notion.
In the agent network, each agent is randomly assigned a single resource type. For simplicity, tasks are created by randomly generating resource requirements R(·): the number of resources required for a given task is chosen uniformly at random between 1 and the number of available resource types (so the size of R(·) never exceeds that number), and each required resource is then selected randomly from the available resource types. In addition, at each time step, a task arrives at the agent network with a fixed probability. The required time to complete a task, PD(·), is a random positive integer, and the latest start time of a task, DL(·), is also a random positive integer, which must be greater than the task arrival time AT(·). The task is then randomly assigned to an IDLE agent for allocation, as described in Section 3. Finally, the evaluation criteria consist of Profit_Net, which is the summation of each individual agent's profit, and the time consumed by these mechanisms. Profit_Net is calculated using (8). In (8), each agent's profit is the difference between its income and its payout. An agent's, say a_i's, income consists of the reward obtained when it completes tasks (reward_i) and the penalties received from other agents when they change their degrees of involvement in that agent's coalition (recPenalty_i). An agent's payout is the penalty it pays to the coalition leaders, i.e., the Initiators, when it changes its degrees of involvement in their coalitions (penalty_i). |A| denotes the number of agents in the network. As this paper mainly focuses on a theoretical study, we simply use a general-purpose programming language, i.e., Java, to simulate the proposed mechanism.
In the experiment, "agent" is initially programmed as a class, all agents are objects of this class, and resources are simply represented as integers. Moreover, the experiment is run on the Windows XP SP3 operating system with an Intel Core 2 Duo 3 GHz CPU and 2 GB of RAM. The experimental results are obtained by averaging over 100 runs. For clarity, the values of the parameters used in these experiments, together with their meanings, are listed in Table 1. These values were chosen experimentally to provide the best results.
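To make the Profit_Net criterion concrete, the following small sketch (our own illustration, not the authors' Java simulation; the AgentRecord fields and the example values are assumed for demonstration) computes the network profit as the sum, over all agents, of rewards plus received penalties minus paid penalties, following the description of (8).

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    reward: float        # reward_i: reward obtained from completed tasks
    rec_penalty: float   # recPenalty_i: penalties received from other agents
    penalty: float       # penalty_i: penalties paid to Initiators

def profit_net(agents):
    """Network profit: the sum of each agent's income minus its payout (cf. (8))."""
    return sum(a.reward + a.rec_penalty - a.penalty for a in agents)

if __name__ == "__main__":
    agents = [AgentRecord(10.0, 1.5, 0.5), AgentRecord(6.0, 0.0, 2.0)]
    print(profit_net(agents))   # 15.0
```

A comparison of the mechanisms would then simply run this accumulation over the agent records produced by each mechanism's simulation.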
|
A Survey on Modeling Language Evolution in the New Millennium <s> Introduction <s> This paper surveys work on the computational modeling of the origins and evolution of language. The main approaches are described and some example experiments from the domains of the evolution of communication, phonetics, lexicon formation, and syntax are discussed. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Introduction <s> This paper describes a model for explaining the emergence and the universal structural tendencies of vowel systems. Both are considered as the result of self-organisation in a population of language users. The language users try to imitate each other and to learn each other’s vowel systems as well as possible under constraints of production and perception, while at the same time maximising the number of available speech sounds. It is shown through computer simulations that coherent and natural sound systems can indeed emerge in populations of artificial agents. It is also shown that the mechanism that is responsible for the emergence of sound systems can be used for learning existing sound systems as well. Finally, it is argued that the simulation of agents that can only produce isolated vowels is not enough. More complex utterances are needed for other interesting universals of sound systems and for explaining realistic sound change. Work in progress on implementing agents that can produce and perceive complex utterances is reported. <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Introduction <s> Why is language the way it is? How did language come to be this way? And why is our species alone in having complex language? These are old unsolved questions that have seen a renaissance in the dramatic recent growth in research being published on the origins and evolution of human language. This review provides a broad overview of some of the important current work in this area. We highlight new methodologies (such as computational modeling), emerging points of consensus (such as the importance of pre-adaptation), and the major remaining controversies (such as gestural origins of language). We also discuss why language evolution is such a difficult problem, and suggest probable directions research may take in the near future. Language is one of the hallmarks of the human species – an important part of what makes us human. Yet, despite a staggering growth in our scientific knowledge about the origin of life, the universe and (almost) everything else that we have seen fit to ponder, we know comparatively little about how our unique ability for language originated and evolved into the complex linguistic systems we use today. Why might this be? When Charles Darwin published his book, The Origin of Species, in 1859 there was already a great interest in the origin and evolution of language. A plethora of ideas and conjectures flourished but with few hard constraints to limit the realm of possibility, the theorizing became plagued by outlandish speculations. By 1866 this situation had deteriorated to such an extent that the influential <s> BIB003 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Introduction <s> In this article I provide a review of studies that have modeled interactions between language evolution and demographic processes. The models are classified in terms of three different approaches: analytical modeling, agent-based analytical modeling, and agent-based cognitive modeling. 
I show that these approaches differ in the complexity of interactions that they can handle and that the agent-based cognitive models allow for the most detailed and realistic simulations. Thus readers are provided with a guideline for selecting which approach to use for a given problem. The analytical models are useful for studying interactions between demography and language evolution in terms of high-level processes; the agent-based analytical models are good for studying such interactions in terms of social dynamics without bothering too much about the cognitive mechanisms of language processing; and the agent-based cognitive models are best suited for the study of the interactions between the complex sociocognitive mechanisms underlying language evolution. <s> BIB004 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Introduction <s> The interest in language evolution by various disciplines, such as linguistics, computer science, biology, etc., makes language evolution models an active research topic and many models have been defined in the last decade. In this work, an overview of computational methods and grammars in language evolution models is given. It aims to introduce readers to the main concepts and the current approaches in language evolution research. Some of the language evolution models, developed during the decade 2003---2012, have been described and classified considering both the grammatical representation (context-free, attribute, Christiansen, fluid construction, or universal grammar) and the computational methods (agent-based, evolutionary computation-based or game theoretic). Finally, an analysis of the surveyed models has been carried out to evaluate their possible extension towards multimodal language evolution. <s> BIB005
|
Over the past years, the fascinating question "How did human language evolve?" has received many answers from the research community. Several contributions have been proposed by researchers from many disciplines, ranging from anthropology and biology to linguistics, psychology, and computer science. In this interdisciplinary perspective on the study of language evolution, anthropologists have mainly been devoted to investigating how the nature of language and its functions have evolved and how this evolution has influenced other aspects of cultural life (e.g., interactions within societies, social identity, and group membership); biologists have studied the evolution of language starting from the fundamentals of the language faculty and focusing on the distinctive behavioral mechanisms that enable the emergence of language; linguists have focused on the language properties (e.g., morphemes, words, syntactic patterns, semantic structures, etc.) that can emerge and co-evolve within cultures; psychologists have mainly addressed the mental processes and structures underlying language use and evolution; computer scientists have mainly aimed to better understand how specific computational mechanisms affect the outcome of observed linguistic phenomena. These perspectives are strongly connected with the representation of the complexity of the phenomenon of language evolution. Language, indeed, is a complex and non-linear dynamic system BIB001, and it is not a trivial task to provide a formal representation of the dynamics of the processes occurring during its evolution. To this aim, computational modeling has become fundamental for investigating and simulating the behavior and long-term dynamics of human language BIB003 BIB002. Computational modeling has been applied extensively in the new millennium, giving rise to a great number of language evolution models. These models have been surveyed by several authors BIB005 BIB004 in the literature. To advance the field of language evolution modeling, it is useful to consider the new developments by carrying out a bibliographic analysis of the most relevant models developed in this new millennium. Given the ongoing interest in this research topic, we think that such an analysis is valuable to many researchers for revealing the developments in the field and for planning future research directions. Therefore, the goal of the paper is to analyze the bibliographic production and scientific impact of these language evolution models and the future trends and perspectives of this research field. In this analysis, we adopt the classification of language evolution models proposed by Grifoni et al. BIB005, based on the computational method (agent-based, evolutionary computation-based, and game-theoretic models) and the grammatical formalism (context-free grammar-based, attribute grammar-based, Christiansen grammar-based, fluid construction grammar-based, and universal grammar-based models). We extended the analysis to papers published in the period 2001-2017 that relate to the identified models. Specifically, we started from the ten language evolution models surveyed by Grifoni et al. BIB005, and we observed their bibliographic production to identify the computational methods and grammatical formalisms with the highest scientific impact over the years. Moreover, we discuss the validation strategies for language evolution models usually applied in the literature, and we report some results obtained by the authors of the models during their evaluation.
Finally, we outline the most promising directions in this research field, derived from a brief interview with the authors of the surveyed models, the future work sections of the surveyed papers, and the literature. The remainder of the paper is organized as follows. The section "Analyzed Language Evolution Models" gives some information about language evolution models developed in the years 2001-2017, categorizing them according to grammatical representations and computational approaches. In the section "Bibliographic Production of Language Evolution Models", an analysis of the bibliographic production and citations of language evolution models is provided. The section "Validation of Language Evolution Models" overviews the validation strategies usually applied in language evolution and describes how they have been applied to evaluate and compare the surveyed language evolution models. In the section "Trends and Future Perspectives", a discussion of the future trends and perspectives of language evolution models is given. The section "Conclusions" concludes the paper.
|
A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> The modern theory of evolutionary dynamics is founded upon the remarkable insights of R. A. Fisher and Sewall Wright and set forth in the loci classici The Genetical Theory of Natural Selection (1930) and ‘Evolution in Mendelian Populations’ (1931). By the time of the publication of Wright’s paper in 1931 all of the theory of population genetics, as it is presently understood, was established. It is a sign of the extraordinary power of these early formulations, that nothing of equal significance has been added to the theory of population genetics in the thirty years that have passed since that time. Yet we cannot take this period to mean that we now have an adequate theory of evolutionary dynamics. On the contrary, the theory of population genetics, as complete as it may be in itself, fails to deal with many problems of primary importance for an understanding of evolution. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> A computationally implemented model of the transmission of linguistic behavior over time is presented. In this iterated learning model (ILM), there is no biological evolution, natural selection, nor any measurement of the success of the agents at communicating (except for results-gathering purposes). Nevertheless, counter to intuition, significant evolution of linguistic behavior is observed. From an initially unstructured communication system (a protolanguage), a fully compositional syntactic meaning-string mapping emerges. Furthermore, given a nonuniform frequency distribution over a meaning space and a production mechanism that prefers short strings, a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language. <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> In this paper, an agent-based evolutionary computing technique is introduced, that is geared towards the automatic induction and optimization of grammars for natural language (grael). We outline three instantiations of the grael-environment: the grael-1 system uses large annotated corpora to bootstrap grammatical structure in a society of autonomous agents, that tries to optimally redistribute grammatical information to reflect accurate probabilistic values for the task of parsing. In grael-2, agents are allowed to mutate grammatical information, effectively implementing grammar rule discovery in a practical context. Finally, by employing a separate grammar induction module at the onset of the society, grael-3 can be used as an unsupervised grammar induction technique. <s> BIB003 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> From the Publisher: ::: Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language provides the first comprehensive introduction to Grammatical Evolution, a novel approach to Genetic Programming that adopts principles from molecular biology in a simple and useful manner, coupled with the use of grammars to specify legal structures in a search. 
Grammatical Evolution's rich modularity gives a unique flexibility, making it possible to use alternative search strategies - whether evolutionary, deterministics or some other approach - and to radically change its behavior by merely changing the grammar supplied. This approach to Genetic Programming represents a powerful new weapon in the Machine Learning toolkit that can be applied to a diverse set of problem domains. <s> BIB004 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> Language is culturally transmitted. Iterated learning, the process by which the output of one individual's learning becomes the input to other individuals' learning, provides a framework for investigating the cultural evolution of linguistic structure. We present two models, based upon the iterated learning framework, which show that the poverty of the stimulus available to language learners leads to the emergence of linguistic structure. Compositionality is language's adaptation to stimulus poverty. <s> BIB005 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> This study examines the possibility of evolving the grammar that Grammatical Evolution uses to specify the construction of a syntactically correct solution. As the grammar dictates the space of symbols that can be used in a solution, its evolution represents the evolution of the genetic code itself. Results provide evidence to show that the co-evolution of grammar and genetic code with a solution using grammatical evolution is a viable approach. <s> BIB006 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> Since many domains are constantly evolving, the associated domain specific languages (DSL) inevitably have to evolve too, to retain their value. But the evolution of a DSL can be very expensive, since existing words of the language (i.e. programs) and tools have to be adapted according to the changes of the DSL itself. In such cases, these costs seriously limit the adoption of DSLs. This paper presents Lever, a tool for the evolutionary development of DSLs. Lever aims at making evolutionary changes to a DSL much cheaper by automating the adaptation of the DSL parser as well as existing words and providing additional support for the correct adaptation of existing tools (e.g. program generators). This way, Lever simplifies DSL maintenance and paves the ground for bottom-up DSL development. <s> BIB007 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> This paper describes Christiansen grammar evolution (CGE), a new evolutionary automatic programming algorithm that extends standard grammar evolution (GE) by replacing context-free grammars by Christiansen grammars. GE only takes into account syntactic restrictions to generate valid individuals. CGE adds semantics to ensure that both semantically and syntactically valid individuals are generated. It is empirically shown that our approach improves GE performance and even allows the solution of some problems are difficult to tackle by GE <s> BIB008 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> We investigate a model of language evolution, based on population game dynamics with learning. 
First, we examine the case of two genetic variants of universal grammar (UG), the heart of the human language faculty, assuming each admits two possible grammars. The dynamics are driven by a communication game. We prove using dynamical systems techniques that if the payoff matrix obeys certain constraints, then the two UGs are stable against invasion by each other, that is, they are evolutionarily stable. Then, we prove a similar theorem for an arbitrary number of disjoint UGs. In both theorems, the constraints are independent of the learning process. Intuitively, if a mutation in UG results in grammars that are incompatible with the established languages, then the mutation will die out because mutants will be unable to communicate and therefore unable to realize any potential benefit of the mutation. An example for which these theorems do not apply shows that compatible mutations may or may not be able to invade, depending on the population's history and the learning process. These results suggest that the genetic history of language is constrained by the need for compatibility and that mutations in the language faculty may have died out or taken over due more to historical accident than to any straightforward notion of relative fitness. <s> BIB009 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> This article deals with the typology of the case marking of semantic core roles. The competing economy considerations of hearer (disambiguation) and speaker (minimal effort) are formalized in terms of EVOLUTIONARY GAME THEORY. It is shown that the case-marking patterns that are attested in the languages of the world are those that are evolutionarily stable for different relative weightings of speaker economy and hearer economy, given the statistical patterns of language use that were extracted from corpora of naturally occurring conversations. <s> BIB010 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> This paper describes and tests the utility of a meta grammar approach to grammatical evolution (GE). Rather than employing a fixed grammar as is the case with canonical GE, under a meta grammar approach the grammar that is used to specify the construction of a syntactically correct solution is itself allowed to evolve. The ability to evolve a grammar in the context of GE means that useful bias towards specific structures and solutions can be evolved and directly incorporated into the grammar during a run. This approach facilitates the evolution of modularity and reuse both on structural and symbol levels and consequently could enhance both the scalability of GE and its adaptive potential in dynamic environments. In this paper an analysis of the extent that building block structures created in the grammars are used in the solution is undertaken. It is demonstrated that building block structures are incorporated into the evolving grammars and solutions at a rate higher than would be expected by random search. Furthermore, the results indicate that grammar design can be an important factor in performance. <s> BIB011 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> We defined FCGlight, a refined version of the Fluid Construction Grammar (FCG), which is a formalism for studying the evolution of the natural language. 
We picked a core subset of FCG, and expressed it in the semantic framework of the Order-Sorted Features (OSF) logic. This allows for efficient processing, and also gives FCG a solid formal background for further analysis and improvement. Inspired from the conception of LIGHT[5], a system for natural language processing with large scale unification grammars, we developed a prototype system which implements FCGlight and can conduct language evolution experiments in a multi-agent population. We proved the functionalities of this system by running a experiment which models the evolution of the Russian verb aspect. <s> BIB012 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> In this paper, we show an application of Adaptable Grammars to language evolution. An adaptable grammar may be defined as a logically based transformational grammar formalism in which the grammar itself may be affected in a derivation step. This grammar formalism was originally intended for describing software systems and programming languages. For the field of natural language analysis, the main advantage of adaptable grammars over other types of formal grammars is the idea of evolution. Adaptable grammars are dynamic entities in which novelties appearing in lexical units or language structure can create new or modify existing grammar rules. Taking into account this idea of ‘dynamicity’, we suggest the possibility of applying adaptable grammars to natural language change. <s> BIB013 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> Considering the adequacy of agent systems for the simulation of language evolution, we introduce a formal-language-theoretic multi-agentmodel based on grammar systems that may account for language change: cultural grammar systems. The framework we propose is a variant of the so-called eco-grammar systems. We modify this formal model, by adding new elements and relationships, in order to obtain a new machinery to describe the dynamics of the evolution of language. <s> BIB014 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> We study evolutionary game theory in a setting where individuals learn from each other. We extend the traditional approach by assuming that a population contains individuals with different learning abilities. In particular, we explore the situation where individuals have different search spaces, when attempting to learn the strategies of others. The search space of an individual specifies the set of strategies learnable by that individual. The search space is genetically given and does not change under social evolutionary dynamics. We introduce a general framework and study a specific example in the context of direct reciprocity. For this example, we obtain the counter intuitive result that cooperation can only evolve for intermediate benefit-to-cost ratios, while small and large benefit-to-cost ratios favor defection. Our paper is a step toward making a connection between computational learning theory and evolutionary game dynamics. <s> BIB015 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> The interest in language evolution by various disciplines, such as linguistics, computer science, biology, etc., makes language evolution models an active research topic and many models have been defined in the last decade. 
In this work, an overview of computational methods and grammars in language evolution models is given. It aims to introduce readers to the main concepts and the current approaches in language evolution research. Some of the language evolution models, developed during the decade 2003---2012, have been described and classified considering both the grammatical representation (context-free, attribute, Christiansen, fluid construction, or universal grammar) and the computational methods (agent-based, evolutionary computation-based or game theoretic). Finally, an analysis of the surveyed models has been carried out to evaluate their possible extension towards multimodal language evolution. <s> BIB016 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analyzed Language Evolution Models <s> The well-established framework of evolutionary dynamics can be applied to the fascinating open problems how human brains are able to acquire and adapt language and how languages change in a population. Schemas for handling grammatical constructions are the replicating unit. They emerge and multiply with variation in the brains of individuals and undergo selection based on their contribution to needed expressive power, communicative success and the reduction of cognitive effort. Adopting this perspective has two major benefits. (i) It makes a bridge to neurobiological models of the brain that have also adopted an evolutionary dynamics point of view, thus opening a new horizon for studying how human brains achieve the remarkably complex competence for language. And (ii) it suggests a new foundation for studying cultural language change as an evolutionary dynamics process. The paper sketches this novel perspective, provides references to empirical data and computational experiments, and points to open problems. <s> BIB017
|
Many researchers of language evolution, mainly linguists and computer scientists, have paid considerable attention to understanding how the evolution of language can be computationally represented through a formal model. In the 17 years between 2001 and 2017, several models of language evolution have been produced. In this survey, we aim at analyzing trends in language evolution models developed in the new millennium. The analyzed models are taken from the survey of Grifoni et al. BIB016, which classifies these models according to a twofold point of view, representational and computational; the analysis is extended to cover the years up to 2017. The representational point of view investigates the grammatical representations that are used in language evolution models to represent the language. We are, therefore, interested in understanding how linguistic knowledge can be represented in formal computational models of human-language evolution. According to this point of view, language evolution models have been classified into the following five main categories: context-free grammar-based (CFG-based), attribute grammar-based (AG-based), Christiansen grammar-based (CG-based), fluid construction grammar-based (FCG-based), and universal grammar-based (UG-based). Although there are also non-grammatical models in the literature, here we focus only on language evolution models that have a grammatical representation, maintaining the perspective introduced in the survey of Grifoni et al. BIB016. The reason for this lies primarily in the growing interest in the literature in investigating the emergence of grammars and in the use of sophisticated and more realistic grammatical representations underlying the evolution of language BIB017. In our current research, we are particularly interested in investigating the emergence of grammars, as we are going to develop new models based on grammatical representations. The computational point of view investigates the computational methods that are used to process language evolution. According to this point of view, language evolution models have been classified into three main categories: agent-based, evolutionary computation-based, and game-theoretic. A brief description of the categories of language evolution models discussed in the paper is given in Online Resource 1. The adopted classification is depicted in Fig. 1. Hereafter, we rely on this classification when we refer to the categories of language evolution models. Table 1 summarizes the analyzed models according to the provided classification. Note that blank boxes in the table represent combinations of computational methods and grammatical representations unexplored by the current literature, which could be the subject of future research. In the remainder of this section, we provide some generic information about these models. The GRAEL (GRAmmar EvoLution) framework BIB003 provides an evolutionary computing approach to natural language grammar optimization and induction. GRAEL works with a population of agents, each of which holds a set of linguistic structures, represented using a CFG formalism, that allows it to formulate sentences and analyze other agents' sentences. LEVER (Language Evolver) BIB007 provides a tool for the evolutionary development and adaptation of the syntax, parser, and vocabulary of domain-specific languages (DSLs). LEVER uses attribute grammars as the specification formalism for both the syntax and the semantics of a DSL.
Grammatical evolution by grammatical evolution ((GE)²) BIB011 provides an evolutionary computing approach in which an input grammar, expressed using CFG notation, is used to specify the construction of another syntactically correct grammar. Attribute Grammar Evolution (AGE) is an evolutionary computation approach that extends the grammatical evolution proposed by O'Neill and Ryan BIB004 by using AGs instead of CFGs.
Table 1 (summary of the analyzed models by grammatical representation):
- Context-free grammar: Cultural grammar system (CGS) BIB014; GRAmmar EvoLution (GRAEL), which appears under both the agent-based and the evolutionary computation-based methods; grammatical evolution by grammatical evolution ((GE)²) BIB006
- Attribute grammar: Language Evolver (LEVER) BIB007; Attribute Grammar Evolution (AGE)
- Christiansen grammar: Christiansen Grammar Evolution (CGE) BIB008
- Fluid construction grammar: FCGlight BIB012
- Universal grammar: Iterated Learning Model (ILM) BIB005; Game dynamics (GD) BIB009; Evolutionary game theory (EGT) BIB010
Christiansen Grammar Evolution (CGE) BIB013 BIB008 is an automatic modeling tool that extends the grammatical evolution proposed by O'Neill and Ryan BIB004 by using CGs instead of CFGs. FCGlight BIB012 provides a framework for studying the evolution of natural language. It is based on FCGs, which allow the grammar to change, and on multi-agent language games. Cultural Grammar Systems (CGS) BIB014 is a framework to formalize the cultural dynamics of language evolution. It provides a syntactical framework based on CFGs and a population of agents. Game Dynamics (GD) BIB015 BIB009 is a model of language evolution based on a mixed population, where each member has a genetically determined universal grammar and learns to speak one new grammar. The dynamics of language evolution is driven by a communication game. The Iterated Learning Model (ILM) BIB005 is a tool for investigating the cultural evolution of language. As suggested by the name, ILM applies iterated learning BIB002 to a population of agents that try to reconstruct a universal grammar through an inference process based on observation. Evolutionary Game Theory (EGT) BIB001 was first developed by a theoretical biologist, Maynard Smith, in 1982. EGT has been applied to study language evolution by several authors, such as Jäger BIB010. As suggested by the name, EGT relies on game theory for modeling the evolution of language structure, formalized using universal grammars.
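To give a concrete sense of how the grammar-based models above derive structures from a grammar, the following minimal sketch shows the codon-to-production mapping idea underlying grammatical evolution. It is our own simplified illustration in Python, not the GRAEL, (GE)², AGE, or CGE implementation; the toy grammar, the wrapping rule, and the expansion limit are all assumptions made for the example.

```python
# A toy context-free grammar: each non-terminal maps to a list of
# alternative productions (sequences of terminals and non-terminals).
GRAMMAR = {
    "<S>":  [["<NP>", "<VP>"]],
    "<NP>": [["alice"], ["bob"], ["the", "robot"]],
    "<VP>": [["speaks"], ["sees", "<NP>"]],
}

def derive(genome, start="<S>", max_expansions=50):
    """Grammatical-evolution-style genotype-to-phenotype mapping.

    Every time a non-terminal has to be expanded, the next integer codon
    (taken modulo the number of alternatives, wrapping around the genome)
    selects which production to apply."""
    output, stack, used = [], [start], 0
    while stack:
        symbol = stack.pop(0)
        if symbol in GRAMMAR:
            if used >= max_expansions:
                break                                  # guard against runaway derivations
            alternatives = GRAMMAR[symbol]
            codon = genome[used % len(genome)]
            stack = list(alternatives[codon % len(alternatives)]) + stack
            used += 1
        else:
            output.append(symbol)                      # terminal symbol
    return " ".join(output)

if __name__ == "__main__":
    print(derive([7, 2, 4, 1]))   # -> 'the robot speaks' for this particular genome
```

Evolving the genome of integers (and, in approaches such as (GE)², the grammar itself) then amounts to searching over the derivations that such a mapping can produce.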
|
A Survey on Modeling Language Evolution in the New Millennium <s> Bibliographic Production of Language Evolution Models <s> This study was to explore a bibliometric approach to quantitatively assessing current research trends on atmospheric aerosol, using the related literature in the Science Citation Index (SCI) database from 1991 to 2006. Articles were concentrated on the analysis by scientific output, research performances by individuals, institutes and countries, and trends by the frequency of keywords used. Over the years, there had been a notably growth trend in research outputs, along with more participation and collaboration of institutes and countries. Research collaborative papers shifted from national inter-institutional to international collaboration. The decreasing share of world total and independent articles by the seven major industrialized countries (G7) was examined. Aerosol research in environmental and chemical related fields other than in medical fields was the mainstream of current years. Finally, author keywords, words in title and keywords plus were analyzed contrastively, with research trends and recent hotspots provided. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Bibliographic Production of Language Evolution Models <s> UNLABELLED ::: This study uses a bibliometric approach to identify global trends related to the municipal solid waste (MSW). It applies related literature in the Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Conference Proceedings Citation Index - Science (CPCI-S) and Conference Proceedings Citation Index - Social Science & Humanities (CPCI-SSH), retrieved from the ISI Web of Science. The data used covers the period from 1997 to 2014. Analyzed aspects included document type, and publication output as well as distribution of journals, subject category, countries, institutions, title-words, author keywords, and keywords plus. An evaluating indicator, citation score, was applied to characterize the MSW publications. The research outputs of MSW had steadily increased in the field of energy fuels, engineering chemical and biotechnology applied microbiology, especially environmental sciences and engineering environmental. The predominance of Chinese institutions in terms of article count and a predominance of industrialized countries' institutions in terms of citation score were compared. Finally, author keywords, words in title, and keywords plus were analyzed to provide research emphasis, with the developing trends and recent hotspots provided. ::: ::: ::: IMPLICATIONS ::: A systematic overview of scientific literature dealing with municipal solid waste (MSW) is provided by a bibliometric analysis. The analysis of author keywords helps in drawing the research trends in a special perspective. Research studies on food waste, life cycle assessment (LCA), and renewable energy tend to be a new research focus in the area of MSW. The conclusions could provide a reference to the decision making and policy of MSW management for the government to some extent. <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Bibliographic Production of Language Evolution Models <s> The interest in language evolution by various disciplines, such as linguistics, computer science, biology, etc., makes language evolution models an active research topic and many models have been defined in the last decade. 
In this work, an overview of computational methods and grammars in language evolution models is given. It aims to introduce readers to the main concepts and the current approaches in language evolution research. Some of the language evolution models, developed during the decade 2003---2012, have been described and classified considering both the grammatical representation (context-free, attribute, Christiansen, fluid construction, or universal grammar) and the computational methods (agent-based, evolutionary computation-based or game theoretic). Finally, an analysis of the surveyed models has been carried out to evaluate their possible extension towards multimodal language evolution. <s> BIB003
|
In an attempt to gain a better understanding of the current trends regarding language evolution models, we have carried out an analysis of the temporal evolution and scientific impact of the articles published in 2001-2017 that deal with each of the models previously introduced in the section "Analyzed Language Evolution Models". The analysis of scientific production, based on bibliographic data, is one of the most widely used methods for obtaining indicators about the temporal evolution, variations, and trends of a specific field of research. Several works BIB002 BIB001 have applied this kind of analysis in the study of research trends. Consistent with the approach applied in these works, to find the most relevant papers to be included in our analysis, we start from the language evolution models surveyed by Grifoni et al. BIB003 and summarized in Table 1. Therefore, for each of the ten analyzed language evolution models, we have considered the number of published papers in the 14-year period 2001-2014, which we obtained from the authors themselves by asking them for the bibliographic production of their language evolution models. This process yielded 52 papers to be included in our bibliographic analysis (see "Appendix A"). Moreover, we have integrated these papers with those resulting from a systematic search (using two relevant search engines, i.e., Web of Science (WoS) and Scopus) for scientific papers published from 2001 to 2017 (end of June) and dealing with the ten analyzed language evolution models surveyed by Grifoni et al. BIB003. This process yielded a further 32 papers to be included in our bibliographic analysis (see "Appendix A", orange rows for papers retrieved from Scopus and green rows for papers retrieved from WoS). Moreover, we have considered the number of citations and self-citations of these papers (retrieved from Google Scholar at the end of July 2017), which gives a measure of the scientific impact of these models. The analysis of the number of publications and the number of citations of each class of language evolution models is provided in the following subsections.
|
A Survey on Modeling Language Evolution in the New Millennium <s> Analysis of the Scientific Production <s> The modern theory of evolutionary dynamics is founded upon the remarkable insights of R. A. Fisher and Sewall Wright and set forth in the loci classici The Genetical Theory of Natural Selection (1930) and ‘Evolution in Mendelian Populations’ (1931). By the time of the publication of Wright’s paper in 1931 all of the theory of population genetics, as it is presently understood, was established. It is a sign of the extraordinary power of these early formulations, that nothing of equal significance has been added to the theory of population genetics in the thirty years that have passed since that time. Yet we cannot take this period to mean that we now have an adequate theory of evolutionary dynamics. On the contrary, the theory of population genetics, as complete as it may be in itself, fails to deal with many problems of primary importance for an understanding of evolution. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Analysis of the Scientific Production <s> The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical difficiences. In particualr, these models conflate the comptutations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions. <s> BIB002
|
As shown in Fig. 2, the total number of published papers (84 papers) in 2001-2017 varies from a minimum of 1 published paper in 2001 and in 2017 to 14 published papers in 2007. In particular, considering the classification of methods based on the computational modeling paradigm, the scientific production of agent-based models is distributed across all 17 reference years (see Fig. 3a), with a peak of 6 papers in 2003. The scientific production of evolutionary computation-based models is concentrated in the period from 2002 to 2012 (see Fig. 3b), growing from 1 paper in 2002, reaching a peak of 5 papers in 2007, and concluding with 2 papers in 2012. Finally, game-theoretic models (see Fig. 3c) had the least continuous scientific production, with the first publications in 2003 and the last one in 2015, reaching a peak of four published papers in 2007 and no papers in the period 2009-2010. Moreover, we can observe that half of the models are based on evolutionary computation, almost half are agent-based, and only two models are based on game theory. Agent-based models have the highest scientific production with 45 published papers, followed by evolutionary computation-based models with 25 published papers and game-theoretic models with 21 published papers. However, the game-theoretic class contains only two models, compared with four models in the agent-based group and five models in the evolutionary computation-based group. Therefore, considering the average production per model, the agent-based models remain the class with the highest scientific production (11.25 papers/model), followed by game-theoretic models (10.5 papers/model) and evolutionary computation-based models (5 papers/model). This analysis shows that the agent-based models are the most prolific in terms of published papers and have the most continuous bibliographic production throughout the period 2001-2017. Comparing the temporal evolution of the bibliographic production of language evolution models, as shown in Fig. 4, we can observe that agent-based models were the most applied models in the first years of the observed period (2001-2004). Subsequently, they were outclassed by evolutionary computation-based models, which were the most prolific from 2004 to 2007. Finally, agent-based models returned to being the most widely used from 2009 to 2017. This trend closely reflects the evolution of language evolution research. Since the 90s, indeed, several studies have simulated language evolution in a bottom-up fashion using populations of agents. These agent-based models help compensate for the lack of empirical evidence behind many language evolution theories developed during the 80s and 90s on the basis of incomplete or absent evidence. In the early twenty-first century, the majority of modeling efforts were concentrated on studying the evolutionary dynamics of language transmission by applying various biological principles [e.g., reproduction, mutation, selection, recombination (crossover), and survival of the fittest]. These evolutionary computation-based models arise from the need to simplify complex agent-based models, which rely on sets of equations whose complexity grows exponentially with the complexity of the language to be modeled. In the early 80s, a game-theoretic perspective was developed by Maynard Smith BIB001 for modeling the evolution of behavior.
In 2003-2015, this perspective was applied in game-theoretic models to study the evolution of language, with the aim of aggregating the behavior of a population and defining general mathematical equations that model the evolution of this behavior. These models remain mainly conceptual and not largely applied, probably because of the problems highlighted by Watumull and Hauser BIB002 concerning conceptual confusions and empirical deficiencies. Afterward, we considered the classification of models based on the grammatical representation, and we analyzed the scientific production of the five classes of models (Fig. 5 reports the bibliographic production of language evolution models classified according to the grammatical representation). CFG-based models have a scientific production distributed across 11 years (from 2002 to 2012), with peaks of three papers in 2003, 2005, 2007, and 2009 (see Fig. 5a). The scientific production of AG-based models is concentrated in the period from 2005 to 2007 (see Fig. 5b), reaching a peak of 2 papers in 2007. Papers on CG-based models were published during 2 years, 2007 and 2011, with two papers per year (see Fig. 5c). The scientific production of FCG-based models is concentrated in the period from 2006 to 2017 (see Fig. 5d), with a peak of three papers in 2011. Finally, UG-based models (see Fig. 5e) had the most continuous scientific production, with the first publications in 2001 and the last one in 2016, reaching a peak of seven published papers in 2007. UG-based models have the highest scientific production with 43 published papers, followed by CFG-based models with 21 published papers, FCG-based models with 12 papers, and AG-based and CG-based models with 4 published papers each. However, only one model belongs to each of the CG-based and FCG-based classes, compared with three models in each of the CFG-based and UG-based groups. Therefore, considering the average production per model, the UG-based models remain the class with the highest scientific production (14.33 papers/model), followed by FCG-based models (12 papers/model), CFG-based models (7 papers/model), CG-based models (4 papers/model), and AG-based models (2 papers/model). This analysis shows that the UG-based models are the most prolific in terms of published papers and have the most continuous bibliographic production throughout the period 2001-2017. This fact can be justified by the fact that UGs, as well as CFGs, provide a general theoretic grammatical framework with very few constraints that can be easily adapted to represent linguistic evolution. On the contrary, CG-based, AG-based, and FCG-based models represent attempts to apply specialized grammars to the evolution of domain-specific languages, and therefore they have not had a large following.
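As a quick check, the per-model averages quoted in this section follow directly from dividing each class's total paper count by its number of models; the short snippet below (using only the figures reported above) reproduces them.

```python
# Published papers and number of models per class, as reported in this section.
classes = {
    "agent-based": (45, 4),
    "evolutionary computation-based": (25, 5),
    "game-theoretic": (21, 2),
    "UG-based": (43, 3),
    "FCG-based": (12, 1),
    "CFG-based": (21, 3),
    "CG-based": (4, 1),
    "AG-based": (4, 2),
}

for name, (papers, models) in classes.items():
    print(f"{name}: {papers / models:.2f} papers/model")
# agent-based: 11.25, evolutionary: 5.00, game-theoretic: 10.50,
# UG: 14.33, FCG: 12.00, CFG: 7.00, CG: 4.00, AG: 2.00
```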
|
A Survey on Modeling Language Evolution in the New Millennium <s> Validation Strategies <s> Language is such an important human characteristics that we would like to know how it first came into existence and how it managed to reach its present form. If we could go back in time for a sufficient number of generations we would encounter human ancestors that did not have language. In a succession of generations the descendants of those non-linguistic ancestors came to possess the ability to speak and to understand the speech of others. What made the transition possible? How did the transition occur? Were there intermediate forms of language in the sense of communication systems different from known animal communication systems but also different from language as we know it and as is spoken today by all humans? <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Validation Strategies <s> Proceeding of the 6th International Conference (EVOLANG6), celebrada en Roma (Italia) del 12 al 15 de abril de 2006. <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Validation Strategies <s> The historical origins of natural language cannot be observed directly. We can, however, study systems that support language and we can also develop models that explore the plausibility of different hypotheses about how language emerged. More recently, evolutionary linguists have begun to conduct language evolution experiments in the laboratory, where the emergence of new languages used by human participants can be observed directly. This enables researchers to study both the cognitive capacities necessary for language and the ways in which languages themselves emerge. One theme that runs through this work is how individual-level behaviours result in population-level linguistic phenomena. A central challenge for the future will be to explore how different forms of information transmission affect this process. <s> BIB003 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Validation Strategies <s> Abstract This paper revisits the key questions in current thinking in evolutionary linguistics, reviews the alleged stages during language evolution, and evaluates the mainstream hypotheses on language emergence, namely innatism and emergentism. We summarize both the supporting and opposing arguments for these hypotheses and evaluate two scenarios respectively following these hypotheses. As we will show, many of these arguments require an interdisciplinary collaboration between linguistics and other disciplines such as cognitive sciences, psychology, neuroscience, genetics, animal behaviors, and computer simulation, which illustrates the interdisciplinary nature of evolutionary linguistics and highlights the opportunities for future engagement of our discipline. <s> BIB004
|
Generally, the validation of language evolution models is carried out by comparing the outcomes of the model with the reality it exemplifies. The different validation techniques applied in the literature can be grouped into three strategies: analytic techniques, computer simulation, and experiments. Analytic techniques use mathematical equations that typically describe the evolving system. Solving these equations allows predicting the global evolution of the system. The global quantities used in analytic models are normally measured by empirical observation. If these data cannot be observed, as in language evolution, this strategy can be applied to the outcome of computer simulation. The main disadvantage of this strategy is that finding the global quantities and the mathematical equations that describe language evolution is not a trivial task; indeed, for a large number of non-linear dynamical systems, no solution can be found. As language evolution falls into this category, it is hard to formulate equations that are powerful enough to produce verifiable predictions. Computer simulation allows studying the dynamics of language evolution, reconstructing the trajectories of changes, and recapitulating the effect of relevant factors on evolution BIB004. It attempts to simulate the conceptual model of a system by a computer program BIB001. This validation strategy also includes embodied simulations that use hardware-based models, such as robots. The computer program contains a set of hypotheses on the causes, mechanisms, and processes that govern the analyzed phenomenon represented by the model. Running the program allows observing and manipulating the parameters, conditions, and variables that control the phenomenon represented by the model, and observing the responses to these manipulations. Computer simulation is particularly useful in cases where analytical models are not applicable due to the high complexity or non-linearity of the modeled system. In these cases, simulation allows testing language evolution models in a virtual experimental laboratory. Simulation also provides a more practical way of discovering new predictions that can be derived from the model. On the other hand, the main disadvantage of computer simulation is that the results could vary greatly in the real world due to unforeseen factors. Moreover, it can be quite expensive in terms of time and necessary resources. Experiments consist of the examination of the real system that has been modeled and the demonstration that specific outcomes occur when certain environmental parameters or system conditions are changed. In the specific field of language evolution, natural experiments should involve humans and their brain reactions in order to observe how language evolves. First attempts at natural experiments for validating language emergence and evolution were reviewed by Steels BIB002; he argued that natural experiments are not sufficiently controllable to be a solid experimental method for language evolution. Afterward, Scott-Phillips and Kirby BIB003 reviewed laboratory-based experiments that use human participants to observe both the cognitive capacities required for language and the ways in which symbolic communication systems emerge and evolve. In addition to natural experiments, artificial experiments may also be performed; these use robots to reproduce human perceptive, cognitive, and linguistic abilities and manipulate them in order to observe the emergence and evolution of language.
Although artificial experiments have some characteristics in common with computer simulation, experiments require both more rigorous assumptions that need to be implemented in the robot and a more stringent way of testing the realism of these assumptions BIB002 . For instance, to validate the capacity to learn a new vocabulary word with an artificial experiment, we have to implement the robot's perception and memory capacities, whereas this is not necessary in a computer simulation. This is the main reason why artificial experiments are used less often than computer simulation.
|
A Survey on Modeling Language Evolution in the New Millennium <s> Language Evolution Models Validated Through Analytic Techniques <s> This paper describes Christiansen grammar evolution (CGE), a new evolutionary automatic programming algorithm that extends standard grammar evolution (GE) by replacing context-free grammars by Christiansen grammars. GE only takes into account syntactic restrictions to generate valid individuals. CGE adds semantics to ensure that both semantically and syntactically valid individuals are generated. It is empirically shown that our approach improves GE performance and even allows the solution of some problems are difficult to tackle by GE <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Language Evolution Models Validated Through Analytic Techniques <s> This paper describes Grammar-based Immune Programming (GIP) for evolving programs in an arbitrary language by immunological inspiration. GIP is based on Grammatical Evolution (GE) in which a grammar is used to define a language and decode candidate solutions to a valid representation (program). However, by default, GE uses a Genetic Algorithm in the search process while GIP uses an artificial immune system. Some modifications are needed of an immune algorithm to use a grammar in order to efficiently decode antibodies into programs. Experiments are performed to analyze algorithm behavior over different aspects and compare it with GEVA, a well known GE implementation. The methods are applied to identify a causal model (an ordinary differential equation) from an observed data set, to symbolically regress an iterated function f(f(x)) = g(x), and to find a symbolic representation of a discontinuous function. <s> BIB002
|
Analytic techniques have been used to evaluate three of the ten language evolution models, all belonging to the class of evolutionary computation-based models, as shown in the first column of Table 2 . Generally, the metric used for validating evolutionary computation-based models (in particular, GE, AGE, and CGE) is the cumulative frequency of success, which is commonly applied in evolutionary computation to measure the probability of finding a solution to a problem within a specific number of generations. It is defined as the number of runs in which a solution to the problem was found BIB002 . GE, AGE, and CGE have been validated and compared using this metric. Specifically, Table 3 shows the parameters that the authors of these three language evolution models adopted for their experiments: the population size, i.e., the number of individuals used by the genetic algorithm; the crossover and mutation ratios, i.e., the probabilities of generating new individuals by crossover and mutation operations; and the number of generations, which represents the maximum number of iterations of the algorithm. The last column of Table 3 shows the performance of the models in terms of cumulative frequency of success. Directly comparing the validation results is not feasible, due to the differences in mutation ratio. However, the authors of the models provided some comparative results in their works. Ortega et al. BIB001 compared GE and CGE performance, showing a cumulative success frequency of 79% for GE and 76% for CGE after 100 runs of the algorithms. Moreover, a comparison between GE and AGE reported a cumulative success frequency of 97% for GE and 95% for AGE after 100 runs of the algorithms. Therefore, GE achieves higher performance than AGE and CGE.
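The cited works report this metric as a count or percentage of successful runs; written out (with notation introduced here only for clarity, not taken from those works), it is:

\[
\mathrm{CSF}(g) \;=\; \frac{\bigl|\{\, r \in R \;:\; \text{run } r \text{ finds a solution within } g \text{ generations} \,\}\bigr|}{\lvert R \rvert}
\]

where \(R\) is the set of independent runs of the algorithm and \(g\) is the generation budget; the percentages quoted above are values of \(\mathrm{CSF}\) computed over 100 runs.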
|
A Survey on Modeling Language Evolution in the New Millennium <s> Language Evolution Models Validated Through Computer Simulation <s> Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skeletal grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Language Evolution Models Validated Through Computer Simulation <s> This paper investigates the effect of predefining semantics in modelling the evolution of compositional languages versus allowing agents to develop these semantics in parallel with the development of language. The study is done using a multi-agent model of language evolution that is based on the Talking Heads experiment. The experiments show that when allowing a co-evolution of semantics with language, compositional languages develop faster than when the semantics are predefined, but compositionality appears more stable in the latter case. The paper concludes that conclusions drawn from simulations with predefined meanings, which most studies use, may need revision. <s> BIB002
|
Computer simulation has been used to validate the following four language evolution models: GRAEL, ILM, GD, and EGT. GRAEL is validated using the F1 score (or F-score), which provides a measure of the experiment's accuracy and can be interpreted as a weighted average of the precision and recall metrics. The F1 score is formally defined as F1 = 2 × (precision × recall) / (precision + recall), where recall gives a measure of the completeness of the model, being the number of correct results divided by the number of results that should have been returned, and precision shows how correct the model is, being the number of correct results divided by the number of all returned results. The Wall Street Journal (WSJ) corpus BIB001 is used as a corpus of annotated sentences for training and testing the population of 100 agents engaged in a series of language games. The obtained F1 score (around 81%) indicates that the GRAEL model performs quite well; that is, the mutated grammar is able to create new evolved parses for understanding more difficult constructions. ILM is validated using compositionality and (communicative) accuracy. The former represents how far the meaning of the whole can be described as a function of the meaning of its parts; it is calculated as the proportion of compositional rules used (both encoded and decoded) over the total number of utterances produced and interpreted BIB002 . A high value of compositionality indicates the emergence of linguistic structures in the language evolution process. The latter is calculated as the fraction of agents that could successfully interpret the utterances produced by the other agents in the population, averaged over the number of games played during the testing phase . The ILM experiment consists of language games run with a population of 2 agents (1 adult and 1 learner) for 250 iterations. In each game, the adult encodes an utterance to convey the meaning of one of 120 objects, while the learner decodes this utterance, constructing its private grammar ontogenetically. At the end of the 250 iterations, the results showed a compositionality of around 0.89 and an accuracy of around 0.85 . This means that ILM performs well in modeling the emergence and evolution of compositional languages. Game-theoretic models (i.e., GD and EGT) do not apply specific metrics, because they do not compare the results of the simulation against expected results; instead, they use computer simulation to trace trajectories of evolutionary change, revealing how the language of the modeled populations changes over evolutionary time (e.g., one equilibrium is reached, the system cycles endlessly, etc.). Specifically, GD is validated by simulating a communication game between two speakers with various probabilities of understanding each other and analyzing under which conditions the UGs learned by the two speakers are evolutionarily stable against invasion by each other. EGT is validated by simulating the emergence of a protolanguage in an initially prelinguistic society of 100 individuals. Each of them plays a round of the game, which consists of interacting with every other individual with the aim of associating five objects with five signals (sounds). At the end of each round, the total payoff of all individuals is calculated according to the communication success, and a proportional number of offspring is generated. After 20 rounds, EGT reaches an evolutionarily stable solution. 
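For reference, the two ILM metrics just described can be written more compactly (again, the notation is introduced here only to summarize the verbal definitions and is not taken from the cited works):

\[
\mathrm{compositionality} \;=\; \frac{N_{\mathrm{comp}}}{N_{\mathrm{utt}}},
\qquad
\mathrm{accuracy} \;=\; \frac{1}{G}\sum_{g=1}^{G} \frac{A_{g}}{A},
\]

where \(N_{\mathrm{comp}}\) is the number of compositional rules used (encoded and decoded), \(N_{\mathrm{utt}}\) is the total number of utterances produced and interpreted, \(A_{g}\) is the number of agents that correctly interpret the utterances produced in game \(g\), \(A\) is the population size, and \(G\) is the number of games played during the testing phase.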
The computer simulation strategy does not provide a single, common metric of the performance of language evolution models: GRAEL used the F1 score, ILM adopted compositionality and accuracy, and GD and EGT evaluated evolutionary stability. This is the main reason why these models cannot be directly compared.
|
A Survey on Modeling Language Evolution in the New Millennium <s> Trends and Future Perspectives <s> In this paper we investigate the application of tree-adjunct grammars to grammatical evolution. The standard type of grammar used by grammatical evolution, context-free grammars, produce a subset of the languages that tree-adjunct grammars can produce, making tree-adjunct grammars, expressively, more powerful. In this study we shed some light on the effects of tree-adjunct grammars on grammatical evolution, or tree-adjunct grammatical evolution. We perform an analytic comparison of the performance of both setups, i.e., grammatical evolution and tree-adjunct grammatical evolution, across a number of classic genetic programming benchmarking problems. The results firmly indicate that tree-adjunct grammatical evolution has a better overall performance (measured in terms of finding the global optima). <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Trends and Future Perspectives <s> A new evolutionary design tool is presented, which uses shape grammars and a grammar-based form of evolutionary computa- tion, grammatical evolution (GE). Shape grammars allow the user to specify possible forms, and GE allows forms to be iteratively selected, <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> Trends and Future Perspectives <s> In this paper, we show an application of Adaptable Grammars to language evolution. An adaptable grammar may be defined as a logically based transformational grammar formalism in which the grammar itself may be affected in a derivation step. This grammar formalism was originally intended for describing software systems and programming languages. For the field of natural language analysis, the main advantage of adaptable grammars over other types of formal grammars is the idea of evolution. Adaptable grammars are dynamic entities in which novelties appearing in lexical units or language structure can create new or modify existing grammar rules. Taking into account this idea of ‘dynamicity’, we suggest the possibility of applying adaptable grammars to natural language change. <s> BIB003
|
From the analysis of the bibliographic production and scientific impact of the ten analyzed language evolution models, we can observe that the current trend is oriented toward the use of agent-based and UG-based models, which have both the highest number of published papers and the highest citation counts. The reason is that agent-based models are better suited to simulating the evolution of complex systems composed of behavioral entities (and human-language evolution falls into this category) and they bring the model closer to real behavior thanks to the support of robust empirical evidence. The need for empirical evidence comes from the necessity to go beyond idealizations and approximations of language evolution phenomena: without empirical evidence, the language evolution process is only numerically determined, which can lead to an unrealistic representation of reality. Therefore, from 2001 to 2017, research on language evolution has moved toward agent-based models, which are supported by empirical evidence, rather than evolutionary computation-based and game-theoretic models. The motivation for the use of UG-based models lies in the fact that the general theoretical grammatical framework provided by UGs turns out to be easily adapted to represent linguistic evolution. Despite that, the need for grammatical formalisms equipped with structures and constructions able to represent semantic features of the language has also emerged during the surveyed period (see Fig. 10, which depicts the evolution of the ten surveyed language evolution models with respect to semantic representation). In particular, models developed after 2005 (AGE, LEVER, CGE, and FCGlight) were oriented towards adding semantics and adaptability to the language representation. With regard to future trends in language evolution, we asked the authors of the ten analyzed language evolution models whether and how their research on language evolution models has evolved in recent years. Table 4 summarizes the answers received from the authors. Most of the authors did not continue this research after the development of the model, due to various reasons, mainly the end of project funding and a different research agenda (GRAEL, FCGLight, ILM, and EGT). Some authors ((GE)2) have focused their research on alternative grammatical formalisms, experimenting on how the language evolution model performs with different kinds of context-free and context-sensitive grammars. Other authors (LEVER) have worked on a further abstraction of the language evolution process using metamodels, representing language evolution as a transformation between metamodels of language. Finally, the authors of GD have focused their research on the cognitive aspects of language evolution, studying the phenomenon at the neural synaptic level and trying to simulate, through neural networks, the evolution that happens in human language. Looking at possible future perspectives in language evolution research, in our opinion, one of the main open challenges is the need for advancements in neuroscientific research, as expressed by several authors in their scientific works . Neuroscience, indeed, represents the gateway to understanding the biological mechanisms of language and, consequently, can provide empirical evidence of the neural processes, allowing the formulation of new hypotheses about language evolution. 
This challenge also matches the research undertaken by the authors of GD (see Table 4 ). As a further future perspective, language evolution models should take into account multimodal aspects of language. Table 4 (answers from authors about the evolution of their research on language evolution models) reports the following:
- GRAEL: the authors did not continue research on language evolution, due to reasons of time and a different research agenda.
- LEVER: the authors evolved their research on language evolution towards modeling languages tailored to a specific domain and defined by a metamodel. They faced the problem of migrating existing models to a new version of their metamodel and proposed an approach, named COPE, that specifies the coupled evolution of metamodels and models to reduce migration effort.
- (GE)2 BIB003 : after the publication of the (GE)2 model, the authors explored the use of different types of grammars ranging from context-free to context-sensitive, including Tree-adjunct Grammars BIB001 , Attribute Grammars , and Shape Grammars BIB002 .
|
A Survey on Modeling Language Evolution in the New Millennium <s> GD <s> One of the main challenges of Human Computer Interaction researches is to improve naturalness of the user’s interaction process. Currently two widely investigated directions are the adaptivity and the multimodality of interaction. Starting from the adaptivity concept, the paper provides an analysis of methods that make multimodal interaction adaptive respect to the final users and evolutionary over time. A comparative analysis between the concepts of adaptivity and evolution, given in literature, is provided, highlighting their similarities and differences and an original definition of evolutionary multimodal interaction is provided. Moreover, artificial intelligence techniques, quantum computing concepts and evolutionary computation applied to multimodal interaction are discussed. <s> BIB001 </s> A Survey on Modeling Language Evolution in the New Millennium <s> GD <s> Scientists studying the communication of non-human animals are often aiming to better understand the evolution of human communication, including human language. Some scientists take a phylogenetic perspective, where the goal is to trace the evolutionary history of communicative traits, while others take a functional perspective, where the goal is to understand the selection pressures underpinning specific traits. Both perspectives are necessary to fully understand the evolution of communication, but it is important to understand how the two perspectives differ and what they can and cannot tell us. Here, we suggest that integrating phylogenetic and functional questions can be fruitful in better understanding the evolution of communication. We also suggest that adopting a multimodal approach to communication might help to integrate phylogenetic and functional questions, and provide an interesting avenue for research into language evolution. <s> BIB002 </s> A Survey on Modeling Language Evolution in the New Millennium <s> GD <s> Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. <s> BIB003 </s> A Survey on Modeling Language Evolution in the New Millennium <s> GD <s> One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. 
But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins—especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the ‘gesture-first hypothesis’ with that of gesture and speech having evolved together, hand in hand—or hand in mouth, rather—as one system. <s> BIB004 </s> A Survey on Modeling Language Evolution in the New Millennium <s> GD <s> The interest in language evolution by various disciplines, such as linguistics, computer science, biology, etc., makes language evolution models an active research topic and many models have been defined in the last decade. In this work, an overview of computational methods and grammars in language evolution models is given. It aims to introduce readers to the main concepts and the current approaches in language evolution research. Some of the language evolution models, developed during the decade 2003---2012, have been described and classified considering both the grammatical representation (context-free, attribute, Christiansen, fluid construction, or universal grammar) and the computational methods (agent-based, evolutionary computation-based or game theoretic). Finally, an analysis of the surveyed models has been carried out to evaluate their possible extension towards multimodal language evolution. <s> BIB005
|
Table 4 continues with the remaining models:
- GD: the authors have explored how to set up stochastic dynamics to represent a population of language learners that can spontaneously move from one equilibrium point to another in a way that resembles documented language change. They have also worked on a simulation of the evolution of neural synaptic coding, to evolve neural networks that manipulate information in a way loosely resembling what happens in language.
- ILM: the authors did not continue to work on the ILM model for language evolution.
- EGT: the authors did not continue to work on the EGT model for language evolution.
Many models developed in the literature have followed a unimodal approach, according to which language is expressed in a single modality, mainly speech and/or text, "thus ignoring the wealth of additional information available in face-to-face communication" BIB003 . However, other significant research conducted in recent years BIB005 BIB004 BIB003 BIB002 highlights the importance of abandoning the traditional distinctions among modalities in language evolution research and pursuing, instead, an integrated vision that combines all modalities (such as gestures, facial expressions, etc.) into a multimodal language [10, 16-20, 30, 42, 43] . Caschera et al. BIB001 also highlighted the need for tools for modeling the evolution of multimodal dialog in long-term changing situations. Therefore, we envision for the coming years a research effort towards the development of multimodal approaches to modeling language evolution.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Introduction <s> This paper focuses on the threat of packet sniffing in a switched environment, and briefly explores the effect in a non-switched environment. Detail is given on a number of techniques, such as "ARP (Address Resolution Protocol) spoofing", which can allow an attacker to eavesdrop on network traffic in a switched environment. Third party tools exist that permit sniffing on a switched network. The result of running some of these tools on an isolated, switched network is presented, and clearly demonstrates that the threat they pose is real and significant. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Introduction <s> The security of wireless LAN is being strengthened through a combination of EAP with an 802.1X-based authentication server, with 802.11i as the standard. As such a security technique effectively defends the existing WEP or protocol vulnerabilities, another approach is needed to prove the vulnerability of the wireless LAN. This paper proposes the wireless MITM (man in the middle) framework, which can actively prove the vulnerability of MITM by applying the MITM technique in the wireless LAN environment, whose security is applied using 802.1X and EAP. It also describes the designing of the required functions and modules. This paper proposes that it is possible to collect the wireless LAN authentication information of the unauthorized user using the designed wireless MLTM-framework under the EAP-MD5 and EAP-TTLS environment. <s> BIB002
|
The Android mobile platform was introduced in 2007 by Google and the Open Handset Alliance. Due to its open nature and support from both Google and third-party developers, it has become the most widespread mobile operating system as of 2014 . The ease of entry to Android development has allowed the platform to expand to its current size; however, this free-for-all environment exacerbates issues in application security and user privacy. The security of Android is a burden which individual developers must bear, and the standards for security are not always clear. Developers writing applications for Android must consider how their code will assure user safety while simultaneously working within tight constraints on memory usage, battery life, and processing power. Their apps must comply with security protocols, launch as their own UID, sign their code, and minimize permissions . Needless to say, lost in the innumerable tasks of application creation and deployment, security errors are undeniably frequent. In this survey, the primary focus will be on the insecure development of Internet-connected non-browser Android applications and the implementation of HTTPS, potential remedies, and suggestions for further research. The increased shift of consumer electronics to the mobile realm, and the development of a wide range of applications that has followed, has meant a steady increase in the amount of personal, critical, and confidential information that flows in and out of mobile devices . These handhelds use channels such as public WiFi which, even with modern protections, can be vulnerable BIB002 . Packets can be easily sniffed and manipulated when sent in plaintext HTTP messages over these networks BIB001 . There are many mechanisms which satisfy the goal of protecting packets, but the SSL and TLS protocols built into HTTPS have become the de facto suite, though they may not deserve the unchecked faith they receive . Thus TLS and its implementations will be the system investigated as we continue. While web browsers are generally able to implement HTTPS connections securely, since they are managed by enormous teams of engineers or contributors, Android applications do not have this sort of oversight. The widespread and mostly unsupervised creation of Android applications has allowed security loopholes to appear in programs which use HTTPS calls . According to the Bureau of Labor Statistics, software jobs in the US are set to grow by 30% by 2022 . It is essential that both new and experienced developers are able to properly tackle loopholes in Android security. The Android platform has several encryption and security suites. It hosts a large Java encryption library and well-respected and versatile third-party implementations such as Bouncy Castle and OpenSSL [11] . There are several different methods of implementing HTTPS built directly into the platform. These methods frequently require no custom code to function securely. In addition, the Android development training website hosts several walkthroughs on HTTPS [12] . Despite the need for transport-layer encryption and the ready availability of encryption mechanisms, many Android applications simply do not implement HTTPS when they should, or their code alters the HTTPS implementation in a way that makes the application vulnerable. In these cases, user data are susceptible to Man-in-the-Middle attacks. As shown in Fig. 1,
MitM attacks allow a malicious actor (E) to eavesdrop on, intercept, and insert itself into a conversation between two legitimate users (A and B). This has become one of the most pressing threats to wireless and cellular communications. Worse, there is frequently no warning to the user that these vulnerable connections are not secured by SSL/TLS. Issues remain in SSL libraries, the TLS and X.509 certificate validation protocols, and server-side configurations. As will be discussed in later sections, cleaning up the SSL universe to protect user data requires the cooperation of more parties than just Android application developers. It is imperative that proper encryption is used in all applications which process user data over the Internet. This paper will analyze why developers do not (or are unable to) implement secure HTTPS connections and present an idea for a solution to the gap between theoretical security and implemented HTTPS security in Android. We will look at state-of-the-art research in fields beyond the mobile realm to detect trends in security and ascertain ways to harden the HTTPS environment on Android. The remainder of this paper is organized as follows. Section 2 contains a summary of HTTPS, its proper usage on the Android platform, and the major relevant findings contributed by security researchers. Section 3 provides a deeper interpretation and grouping of these results, including a listing and discussion of the causes of HTTPS misuse. Section 4 lists potential solutions which have been suggested by security researchers. Section 5.1 gives the observed gaps in current Android HTTPS research. Section 5.2 contains concrete suggestions for future research which fulfill some of the solutions suggested in Section 3 or bridge holes in current understanding noted in Section 5.1. The paper is concluded in Section 6.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> A large number of software security vulnerabilities are caused by software errors that are committed by software developers. We believe that interactive tool support will play an important role in aiding software developers to develop more secure software. However, an in-depth understanding of how and why software developers produce security bugs is needed to design such tools. We conducted a semi-structured interview study on 15 professional software developers to understand their perceptions and behaviors related to software security. Our results reveal a disconnect between developers' conceptual understanding of security and their attitudes regarding their personal responsibility and practices for software security. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> In this article we discuss a programming model to establish a secure communication channel by using HTTPS protocol in Android platform, using some public key infrastructure features like public keys and digital certificates. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. 
Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> Developers use cryptographic APIs in Android with the intent of securing data such as passwords and personal information on mobile devices. In this paper, we ask whether developers use the cryptographic APIs in a fashion that provides typical cryptographic notions of security, e.g., IND-CPA security. We develop program analysis techniques to automatically check programs on the Google Play marketplace, and find that 10.327 out of 11,748 applications that use cryptographic APIs -- 88% overall -- make at least one mistake. These numbers show that applications do not use cryptographic APIs in a fashion that maximizes overall security. We then suggest specific remediations based on our analysis towards improving overall cryptographic security in Android applications. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers. 
<s> BIB006 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB007 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB008 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Overview of Android HTTPS and current findings <s> The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonplace these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attack on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. 
Our results indicate that 0.2% of the SSL connections analyzed were tampered with forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses to such attacks are also discussed. <s> BIB009
|
Cryptography is difficult to implement even with modern software BIB005 . In order to create resistant keys, complex algorithms and programming mechanisms are needed. Hundreds of algorithms are used for different steps in the encryption process . Adding to this, developers aren't always taught security best practices BIB001 . As computers grow in processing capacity, encryption systems will grow in intricacy to maintain their defenses against brute-force and man-in-the-middle (MITM) attacks. Internet systems have developed greater complexity with the influx of users, web stakeholders, and non-traditional server methodologies BIB008 . In order to secure user data and the integrity of Internet-connected applications, developers must be able to properly implement encryption technologies BIB005 . SSL/TLS is one such cryptographic system which requires a layer of abstraction in order to be usable by developers. SSL was developed to provide an end-to-end encrypted data channel for servers and clients on wireless systems. Given that wireless technology is prone at the physical level to eavesdropping attacks based on RF broadcast interception, this cryptographic protocol is vital for the secure transfer of any data to and from cell phones . The cornerstone of SSL is the ability of the client to confirm without a doubt that the server contacted is the correct one. From here, data can then be transferred with trust. To establish this state of trust, a complicated, mixed public-private key exchange takes place. This requires an extensive handshake and verification process to avoid sending the encryption key to any interceptors on the network who have latched on to the chain of communication. At the lowest level, SSL functions in the way depicted in Fig. 2 . The client sends an HTTPS request to the server with its SSL version number and supported ciphers. The server responds with its SSL version number and ciphers as well as its certificate. This server certificate has been signed by a trusted certificate authority (CA) which has verified the server's authenticity. The client will compare the certificate's public key to its local key store and the field values to expected values. If the certificate passes, and the certificate has not been revoked by a CA (as determined by a query to a CA's certificate revocation list (CRL)), the handshake continues. The cipher suite is chosen from the algorithms which the client and server have in common. An example cipher suite could use ECDHE for key exchange, RSA for certificates, AES128-GCM for message encryption, and SHA256 for message integrity checking. A pre-master secret is encrypted with the server's public key using a cipher suite which is in common between the two machines and transmitted to the server. If the client has been asked to verify itself with a certificate, this will be included with the secret and transmitted to the server. If the authenticity is confirmed, each machine uses the pre-master secret to generate the master secret, a session key that functions as the symmetric key for the SSL communication. Once the handshake has been completed and each device informs the other that all further communication will be encrypted with the session key, the client encrypts its messages using the symmetric key and sends the data to the server. Once all data is sent, the connection is terminated [19] . Digital certificates, the core of the SSL system, are based on the X.509 protocol . 
This protocol, along with the Online Certificate Status Protocol (OCSP), establishes how certificates are to be developed, validated, and revoked [23] . The major components of certificate checking are issuer verification, hostname validation, and revocation checking. Each of these steps assures that the server in question is still trusted by a certificate authority. Within the certificate validation process, issues have arisen with servers signing their own certificates and with certificates using wildcard hostnames (for example, *.google.com). These discrepancies are easily spotted and flagged by properly implemented SSL clients or by humans. However, in situations where the functionality of X.509 has been compromised by custom code, such as removed revocation checks, these invalid certificates can be accepted, rendering the SSL process useless BIB003 . Without proper validation checks, any rogue access point can break into the chain of communication, send a random certificate to the user, and forward the packets to the original server, decrypting and reading all data flowing between the ends. Much in the way that higher-level programming languages obscure memory management to make the developer's job more straightforward, many encryption suites try to make encryption and decryption a standard, human-understandable process. The papers and communications which became the foundation of SSL, TLS, and the many improvements, revisions, and decisions on these topics come from the Internet Engineering Task Force (IETF) . These technical documents cannot be directly utilized by most developers. Thus, libraries and encryption suites take the technical documentation and develop a platform for applications to use. Libraries like OpenSSL [11] handle the TLS handshake for developers, leading to a more uniform and secure set of HTTPS connections. While these libraries have come under scrutiny due to security flaws [26] , their role in the Internet is vital and they have existed for years. OpenSSL, founded in 1998, is used by servers which comprise 66% of web servers . SSL/TLS libraries are the 'physical' implementation of the IETF protocols. However, this code is not necessarily 'in the wild'. In this paper, we will use the term 'in the wild' to refer instead to consumer-facing applications. As libraries rely on protocols for guidance, consumer-facing implementations rely on libraries and, in effect, protocols, for guidance. For this paper, the primary 'wild' code investigated will be on the Android platform. The movement of cryptography to abstraction is especially important on Android, which has a heavy focus on third-party development and ease of development. In the standard Android implementation of HTTPS, there are three parts in the creation of a secure connection with SSL/TLS. These three parts (setup, socket generation, and certificate management) reflect the typical TLS handshake protocol BIB002 . The following is a possible SSL implementation. Setup involves customizing the HTTP packet headers; this can be done through HttpParams and ClientConnectionManager to transmit the proper headers and data. Cipher suites can be manually selected, but the defaults will function for most calls. The socket is generated through an instance of the SSLSocketFactory class. Finally, the X509TrustManager, which is an entity within the SSLSocketFactory, will by default authenticate credentials and certificates.
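Before looking at how apps get this wrong, it may help to see the default validation machinery in action. The following minimal sketch (plain JSSE, not Android-specific and not taken from the surveyed apps; the host name is purely illustrative) forces a TLS handshake with the default trust settings and prints what was negotiated. Enabling the HTTPS endpoint identification algorithm is what turns on hostname verification for a raw SSLSocket:

```java
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeInspector {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // Hostname verification is not automatic on a raw SSLSocket;
            // the HTTPS endpoint identification algorithm enables it.
            SSLParameters params = socket.getSSLParameters();
            params.setEndpointIdentificationAlgorithm("HTTPS");
            socket.setSSLParameters(params);

            // Runs the full TLS handshake: version/cipher negotiation,
            // certificate chain validation against the trust store, key exchange.
            socket.startHandshake();

            System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                if (cert instanceof X509Certificate) {
                    System.out.println("Server certificate subject: "
                            + ((X509Certificate) cert).getSubjectX500Principal());
                }
            }
        }
    }
}
```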
The stock Android trust manager has 134 root certificate authorities installed [30] . The library will attempt to trace the certificate trust chain back to one of these 134 root CAs. Once the client and server are certified, transmission commences. This can be an incredibly simple and black-boxed process. For instance, according to the Android Developer Training, valid HTTPS code can be written in four lines using HttpsURLConnection, part of the URLConnection library (see Listing 1) [12, 31] . Assuming that the device has the proper certificates installed, the code in Listing 1 would be operational. The URLConnection API takes care of hostname verification and certificate management. Besides the URLConnection API, other libraries and middleware have been developed for application designers which manage these components. More customization is available in the Java Secure Socket Extension (JSSE), which comes packaged with Java [32] . Other common libraries include OpenSSL [11] and GnuTLS [33] : C-based frameworks for SSL/TLS implementation. Higher-level wrapper implementations of these SSL/TLS libraries include cURL [34] and Apache HttpClient [35] . Furthermore, certain industries have their own middleware, such as Amazon's Flexible Payment Service [36] , which helps abstract the HTTPS connection code away from the developer. While libraries are an attempt to make SSL/TLS implementation the default, they can also leave applications vulnerable. Since an application's security is completely tied to the libraries it uses, flaws in the libraries are by extension flaws in the applications which use them. Georgiev et al. BIB004 found that SSL certificate validation is completely broken in many critical software applications and libraries. In one example, Chase Mobile Banking overrides X509TrustManager and doesn't check the server's certificate, thus violating the most important aspect of HTTPS. Furthermore, Tendulkar et al. found that during an investigation of 26 open-source applications, 10 were using SSL incorrectly. This is perhaps due to misreading library documentation or overriding important features of the suites. Even large-scale enterprises misuse HTTPS or don't fully secure their connections BIB003 . A vulnerability note released by CERT identifies applications by Fandango and Credit Karma which fail to validate SSL certificates . The issues within Android are more complex than a lack of experience with application construction; they derive from issues in libraries, protocols, server configurations, and user comprehension of SSL and TLS. Fahl et al. BIB002 developed a tool called MalloDroid which targets application vulnerabilities dealing with MITM attacks. MalloDroid analyzed the API calls which applications made, checked the validity of certificates, and identified cases of custom HTTPS implementation. Of the applications tested, 8% were vulnerable. The main issues discovered in this investigation were symptoms of unnecessary customizations placed over default SSL code. The use of customized code over SSL defaults is almost always detrimental BIB006 . Tendulkar et al. found that 1613 out of 1742 implementations of SSL with custom code did not require anything beyond the defaults. In fact, in most cases, adding the single character 's' would have allowed the application to securely use HTTPS. The primary customizations at fault were trust managers which accept all certificates, trust all hostnames, and ignore SSL errors BIB003 . Trust managers exist to validate certificates. When certificate checking is turned off, security is compromised. 
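The original Listing 1 ("Example of a standard Android HTTPS call") is not reproduced in this extraction; a minimal sketch of the kind of short, default-relying HttpsURLConnection call it describes might look like the following, where the URL is purely illustrative. This is the simple, platform-validated pattern that the custom code discussed next typically replaces:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SimpleHttpsCall {
    public static void main(String[] args) throws Exception {
        // The platform's default SSLSocketFactory and HostnameVerifier are used,
        // so certificate chain and hostname validation happen automatically.
        URL url = new URL("https://example.com/");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```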
Using user-defined trust managers that accept all certificates or self-signed certificates has been shown to be an issue in the Android community. It places user data in a vulnerable position and compromises the original intention of both SSL libraries and the SSL/TLS protocol. Unfortunately, trusting all hostnames is even simpler than implementing a custom trust manager. Using the org.apache.http.conn.ssl.AllowAllHostnameVerifier, developers are able to bypass checking the server for a certified hostname. Several applications investigated with MalloDroid contained custom classes which allowed all hostnames in the SSL connection BIB003 . This implementation subverts the fundamental trust process of SSL/TLS. Many mobile applications have been found to simply ignore the errors thrown by Android or a corresponding library which could not validate the HTTPS certificate. As seen in Listing 2 ("Overridden SSLSocketFactory found in the wild"; see the sketch after this paragraph), messages are hidden from users and the application continues as though it has a secure connection BIB003 . Again, overriding errors thrown by the system defeats its purpose and mimics the insecure manner in which users click through SSL errors in the browser. However, unlike in desktop browsers, this case comes with the repercussion of never presenting the user with options for their own security. One final issue that doesn't revolve around the customization of default SSL code is that developers sometimes use hybrid HTTP/HTTPS or don't use SSL/TLS at all. Fahl et al. BIB003 found an instant messenger application which sent login credentials over unencrypted channels, leaving them vulnerable to a replay attack. Other hybrid systems were vulnerable to stripping attacks or leaking data through broken SSL channels. Browsers and applications using Android's WebView to connect to a server are particularly vulnerable in these cases. These instances warrant attention from both developers and server architects. Beyond application-level flaws, there are widespread server misconfigurations which lead to a large number of false-positive SSL errors BIB007 . These false positives consume user attention and lead to an unsafe dismissal of SSL errors by developers and users. Certificate management is often a difficult and paperwork-intensive process for server operations teams. In addition, content delivery networks (CDNs), and more specifically CNAME routing, have complicated the certificate issuance and validation process. Since the CDN model is based on surrogate servers handling web traffic load from the customer's server, using HTTPS properly (an intimate client-server model) requires less-than-ideal workarounds to maintain non-repudiation and trust. During the investigation of 20 CDN providers and 10,721 websites by Liang et al. BIB008 , 15% raised invalid certificate errors. All 5 CDNs investigated had insecure HTTPS or HTTP communication on the back-end. Because many Android applications rely on servers which use CDNs, this issue needs to be resolved in order for client-side validation and error reporting to be accurate and attuned to SSL errors. The vulnerabilities identified here are not just issues waiting to be exploited. Huang et al. BIB009 showed that 0.2% of the certificates which users received when accessing Facebook were forged.
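Returning to the overridden validation code referenced above as Listing 2: the original listing is not reproduced in this extraction, but the sketch below approximates the kind of all-trusting override it illustrates. It is shown only as an example of the vulnerable anti-pattern described above and should never be used in production code:

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// WARNING: intentionally insecure. This disables certificate and hostname
// validation, so any MITM-presented certificate is silently accepted.
public class TrustAllAntiPattern {
    public static void installInsecureDefaults() throws Exception {
        TrustManager[] trustAll = {
            new X509TrustManager() {
                @Override public void checkClientTrusted(X509Certificate[] chain, String authType) {
                    // no-op: every client certificate is accepted
                }
                @Override public void checkServerTrusted(X509Certificate[] chain, String authType) {
                    // no-op: every server certificate is accepted, errors never surface
                }
                @Override public X509Certificate[] getAcceptedIssuers() {
                    return new X509Certificate[0];
                }
            }
        };
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, trustAll, new SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory());
        // Mirrors AllowAllHostnameVerifier-style code: any hostname is accepted.
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            @Override public boolean verify(String hostname, SSLSession session) {
                return true;
            }
        });
    }
}
```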
The forged certificates observed by Huang et al. were invalid and should have been rejected, but users still followed them, or the application which accessed the website did not throw an error, thus falling victim to a man-in-the-middle attack. As Moxie Marlinspike has shown with his tool sslsniff [42] , automated MITM attacks are simple to carry out, and the susceptibility of the physical layer of mobile communication to eavesdroppers only raises this risk. Android HTTPS development is in a bind: while developers want a secure system for their users, several factors within and outside their control complicate proper implementation of end-to-end encrypted communication. Why aren't Android developers using HTTPS? Why do existing SSL implementations remain insecure? In the next section, we will analyze factors identified in the current Android development process and SSL/TLS ecosystem which keep HTTPS from reaching the ideal security that it is often claimed to provide.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Developer misuse of HTTPS <s> A large number of software security vulnerabilities are caused by software errors that are committed by software developers. We believe that interactive tool support will play an important role in aiding software developers to develop more secure software. However, an in-depth understanding of how and why software developers produce security bugs is needed to design such tools. We conducted a semi-structured interview study on 15 professional software developers to understand their perceptions and behaviors related to software security. Our results reveal a disconnect between developers' conceptual understanding of security and their attitudes regarding their personal responsibility and practices for software security. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Developer misuse of HTTPS <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Developer misuse of HTTPS <s> The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. 
Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Developer misuse of HTTPS <s> The use of secure HTTP calls is a first and critical step toward securing the Android application data when the app interacts with the Internet. However, one of the major causes for the unencrypted communication is app developer's errors or ignorance. Could the paradigm of literally repetitive and ineffective emphasis shift towards emphasis as a mechanism? This paper introduces emphaSSL, a simple, practical and readily-deployable way to harden networking security in Android applications. Our emphaSSL could guide app developer's security development decisions via real-time feedback, informative warnings and suggestions. At its core of emphaSSL, we use a set of rigorous security rules, which are obtained through an in-depth SSL/TLS security analysis based on security requirements engineering techniques. We implement emphaSSL via the PMD and evaluate it against 75 open- source Android applications. Our results show that emphaSSL is effective at detecting security violations in HTTPS calls with a very low false positive rate, around 2%. Furthermore, we identified 164 substantial SSL mistakes in these testing apps, 40% of which are potentially vulnerable to man-in-the-middle attacks. In each of these instances, the vulnerabilities could be quickly resolved with the assistance of our highlighting messages in emphaSSL. Upon notifying developers of our findings in their applications, we received positive responses and interest in this approach. <s> BIB004
|
Among the papers reviewed, the most commonly reported flaws in HTTPS configuration were due to developer negligence. One such problem is debug code being left in production applications. This problem isn't new; it has been listed in the Common Weakness Enumeration. Leftover code and snippets that bypass standard procedures to make an app operational in development have a widespread effect on application security. Ironically, leftover debug code can violate the very protections the system it models is supposed to afford. HTTPS is not immune to development glitches where the author of a program either leaves vulnerable code or places an intentional override in their application, especially on Android. This could come in the form of a situation where, in order for an application to populate and display data for the developer, certificate validation must be set up to allow a stream of data from a mock server. This lets the author confirm that the other components of the application are functioning properly, but leaves the HTTPS connection vulnerable unless certificate checking is turned back on. This happens with unfortunate frequency, even though there are simple mechanisms, which we will discuss later, to prevent remnant debug code from emerging in production applications. Developers want top-level security, but they also want their product to function properly during development BIB002 . This creates tension with a complex setup like SSL/TLS. As explained earlier, the most crucial part of an HTTPS communication paradigm is the existence of valid certificates and recognized certificate authorities. When running both the server and the application, developers may build their server with self-signed certificates for development and then forget to change the application's validation process once they obtain a proper CA-signed certificate, or bypass it for other reasons BIB003 BIB004 . Running an application without SSL/TLS protection while debugging is largely harmless, but once these apps are open to the general public, there is extreme risk of data theft. Beyond inspecting the code itself, speaking to developers about their mistakes and security bugs yields a more thorough look into the cause of these developer-based flaws. A study conducted by Fahl et al. BIB002 showed a few trends among the developers surveyed. (i) Developers make mistakes. Upon being contacted, many developers at fault took the advice and fixed their mistakes. Others, however, refused to admit that the flaw was an issue BIB003 . These mistakes are understandable: Android is a complex system and public-key cryptography is not easily grasped even with high-level libraries. The startling rejection and denial by some developers in this survey may be a result of embarrassment at incorrectly implementing code. However, for applications made by developers both willing and unwilling to admit fault for SSL misconfiguration, it seems apparent that there was a failing in code coverage during the development process. (ii) Another explanation may be apathy or simple ignorance on the topic of SSL/TLS security. A paper by Xie et al. BIB001 found that while many of the participants in their experiment had a general knowledge and awareness of software security, there were gaps between this knowledge and the actual practices and behaviors that the participants reported. Despite general knowledge of security, they were not able to give concrete examples of their personal security practices.
In the same study, Xie et al. noted a prevalence of the ''it's not my responsibility" attitude. The developers often relied on other people, processes, or technology to handle the security side of the application. When software authors are busy with the pure functionality and viability of their product and with approaching deadlines, the security hat naturally looks much better on another member of the team. Unfortunately, code review and quality assurance only go so far, especially when looking at an application retrospectively. In an ideal situation, security is considered at every step of the development process, from design through deployment. As evidenced by this report, that is not the case in many development environments. (iii) Online forums and user-to-user resources may not be the cause of developer misuse of SSL, but they allow developers to discover ways to bypass security measures in order to solve errors. One such website is Stack Overflow [46] . Typically, the errors being solved arise because the developer posing the question has written a chunk of code incorrectly; in these situations, Stack Overflow operates in an important, positive way. However, in the case of SSL errors, the most trivial way to silence the errors without configuring the server correctly is to stop the application from performing the checks that raise them. Figs. 3 and 4 show an example of an Android SSL certificate expiration override. While most respondents explain that these solutions should not be used in production environments before giving a sample override, some answers, such as the one shown in Fig. 4 , do not provide that context. This answer has received negative feedback, most likely for this reason. However, given almost 10,000 views, this solution has almost certainly ended up in a developer's production application. Thoughtful answers are often mixed with less security-oriented responses on these websites, allowing harmful programming paradigms to develop online. For instance, a developer may ask for a way to get past the UntrustedCertificate error in Apache's HttpClient and the answer may be to use a custom SSLSocketFactory to trust all hosts (a sketch of such an override appears below). Of course, those who answered the question or other community members may stress that this should not run in production, but the solution is still presented in a fashion that lets a desperate developer quickly find a work-around. Stack Overflow cannot be blamed, nor can the open-door style of development which Android possesses. The fault in these situations lies with the developers who either forget to remove the work-around code or simply ignore the warnings about using accept-all policies in production applications. Despite the many flaws which can be found in Android development and production applications, there is no solid evidence that Android developers are more clumsy with SSL than others in a similar situation. Investigations of iOS applications have shown that the two platforms have a comparable number of SSL/TLS vulnerabilities BIB003 . The so-called walled garden approach doesn't seem to fix issues in developer misuse of HTTPS.
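To make the risk concrete, the following is a minimal sketch (class and method names are ours, chosen for illustration) of the kind of trust-all override that circulates in such answers. It compiles against the standard Java/Android javax.net.ssl APIs and silently disables all server certificate validation, which is precisely why it must never ship in a production application:

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// WARNING: insecure by design -- shown only to illustrate the anti-pattern.
public final class TrustAllExample {

    public static SSLSocketFactory insecureSocketFactory() throws Exception {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                @Override
                public void checkClientTrusted(X509Certificate[] chain, String authType) {
                    // accepts every client certificate -- no validation at all
                }
                @Override
                public void checkServerTrusted(X509Certificate[] chain, String authType) {
                    // accepts every server certificate, including an attacker's
                }
                @Override
                public X509Certificate[] getAcceptedIssuers() {
                    return new X509Certificate[0];
                }
            }
        };
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, trustAll, new SecureRandom());
        return context.getSocketFactory();
    }
}
```

Any connection built from such a factory completes a TLS handshake with any certificate, so a man-in-the-middle attacker needs nothing more than a self-signed certificate of their own.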
While it may make sense to correlate a lack of developer knowledge with an incorrect SSL connection, it would be incorrect to say that the Android-specific development paradigm causes these errors. If anything is lacking, it is oversight of mobile applications. Another, more social, factor may contribute to these developer mistakes and in turn affect the security of HTTPS calls. Xie et al. BIB001 show that there are issues in developer environments (team members, support staff, managers, etc.) that can cause developers to make mistakes. One such issue is misplaced trust in process: the belief that software security is only retrospective, or handled in the code review stage. Second, there is the feeling that a software engineer doesn't need to know about vulnerabilities if they aren't specifically working on them. This is like designing a backend without paying any attention to the frontend. Software components do not exist in isolation; all components in the final project need to be designed, developed, and reviewed at every step in the process, and each member of a team should be aware of what the others are doing in order to create the most accurate and unified product. Finally, and most recognizably to developers, there are external constraints which affect workflow and the programming process on a human level. These include deadlines, client desires, government policy, and any confining elements that stop developers from creating the product in the way they imagine. When the budget tightens or a deadline approaches, proper security can be an unfortunate sacrifice when a client's main focus is functionality and design BIB001 . Besides a missing understanding of HTTPS standards, these external constraints potentially hold the most sway over the correctness of a developer's solution. Developers are faced with pressure, deadlines, an imperfect support system, and the complexities of public key infrastructure and Android. Mistakes and misconfigurations are bound to arise in this system, and when user data must rely on this stressed authorship, there are serious implications. While applications created by these developers are the breaking point in this system, there are several more causes both of developer mistakes and of general insecurities in Android SSL connections.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> The Secure Socket Layer (SSL) and its variant, Transport Layer Security (TLS), are used toward ensuring server security. In this paper, we characterize the cryptographic strength of public servers running SSL/TLS. We present a tool developed for this purpose, the Probing SSL Security Tool (PSST), and evaluate over 19,000 servers. We expose the great diversity in the levels of cryptographic strength that is supported on the Internet. Some of our discouraging results show that most sites still support the insecure SSL 2.0, weak export-level grades of encryption ciphers, or weak RSA key strengths. We also observe encouraging behavior such as sensible default choices by servers when presented with multiple options, the quick adoption of AES (more than half the servers support strong key AES as their default choice), and the use of strong RSA key sizes of 1024 bits and above. Comparing results of running our tool over the last two years points to a positive trend that is moving in the right direction, though perhaps not as quickly as it should. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> HTTPS is the de facto standard for securing Internet communications. Although it is widely deployed, the security provided with HTTPS in practice is dubious. HTTPS may fail to provide security for multiple reasons, mostly due to certificate-based authentication failures. Given the importance of HTTPS, we investigate the current scale and practices of HTTPS and certificate-based deployment. We provide a large-scale empirical analysis that considers the top one million most popular websites. Our results show that very few websites implement certificate-based authentication properly. In most cases, domain mismatches between certificates and websites are observed. We study the economic, legal and social aspects of the problem. We identify causes and implications of the profit-oriented attitude of CAs and show how the current economic model leads to the distribution of cheap certificates for cheap security. Finally, we suggest possible changes to improve certificate-based authentication. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> Over the years, SSL/TLS has become an essential part of internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS servers in the wild, we enumerated HTTPS servers on the internet in July 2010 and July 2011. We sent several stimuli to the servers to gather detailed information. We then analysed some parameters of the collected data and looked at how they evolved. We also focused on two subsets of TLS hosts within our measure: the trusted hosts (possessing a valid certificate at the time of the probing) and the EV hosts (presenting a trusted, so-called Extended Validation certificate). Our contributions rely on this methodology: the stimuli we sent, the criteria we studied and the subsets we focused on. 
Moreover, even if EV servers present a somewhat improved certificate quality over the TLS hosts, we show they do not offer overall high quality sessions, which could and should be improved. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24--55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> In this paper, we propose stochastic fingerprints for application traffic flows conveyed in Secure Socket Layer/Transport Layer Security (SSL/TLS) sessions. The fin- gerprints are based on first-order homogeneous Markov chains for which we identify the parameters from observed training application traces. As the fingerprint parameters of chosen applications considerably differ, the method results in a very good accuracy of application discrimination and provides a possibility of detecting abnormal SSL/TLS sessions. Our analysis of the results reveals that obtaining application discrimination mainly comes from incorrect implementation practice, the misuse of the SSL/TLS protocol, various server configurations, and the application nature. 
fingerprints of sessions to classify application traffic. We call a fingerprint any distinctive feature allowing identification of a given traffic class. In this work, a fingerprint corresponds to a first-order homogeneous Markov chain reflecting the dynamics of an SSL/TLS session. The Markov chain states model a sequence of SSL/TLS message types appearing in a single direction flow of a given application from a server to a client. We have studied the Markov chain fingerprints for twelve representative applications that make use of SSL/TLS: PayPal (an electronic service allowing online payments and money transfers), Twitter (an online social networking and micro- blogging service), Dropbox (a file hosting service), Gadu- Gadu (a popular Polish instant messenger), Mozilla (a part of Mozilla add-ons service responsible for verification of the software version), MBank and PKO (two popular European online banking services), Dziekanat (student online service), Poczta (student online mail service), Amazon S3 (a Simple Storage Service) and EC2 (an Elastic Compute Cloud), and Skype (a VoIP service). The resulting models exhibit a specific structure allowing to classify encrypted application flows by comparing its message sequences with fingerprints. They can also serve to reveal intrusions trying to exploit the SSL/TLS protocol by establishing abnormal communications with a server. <s> BIB006 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB007 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Server misconfigurations <s> A properly managed public key infrastructure (PKI) is critical to ensure secure communication on the Internet. Surprisingly, some of the most important administrative steps---in particular, reissuing new X.509 certificates and revoking old ones---are manual and remained unstudied, largely because it is difficult to measure these manual processes at scale. ::: ::: We use Heartbleed, a widespread OpenSSL vulnerability from 2014, as a natural experiment to determine whether administrators are properly managing their certificates. 
All domains affected by Heartbleed should have patched their software, revoked their old (possibly compromised) certificates, and reissued new ones, all as quickly as possible. We find the reality to be far from the ideal: over 73% of vulnerable certificates were not reissued and over 87% were not revoked three weeks after Heartbleed was disclosed. Our results also show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication. <s> BIB008
|
On the opposite end of the TLS system is the HTTPS server. Setting up an Apache HTTPS server is not difficult, and security for these servers can be raised considerably with ease [49] . Despite this, only 45% of the Top 1 Million websites support HTTPS BIB005 . Furthermore, the systems which do operate on HTTPS can have flaws which completely compromise the security of SSL. Korczynski et al. BIB006 discovered that even in a relatively small set of Internet services, certain elements of the TLS protocol were being ignored or misused. These heavily used websites handle significant volumes of traffic and financial transactions, making stronger end-to-end implementations of TLS imperative. SSL server probing BIB003 BIB001 has shown an upward trend in sound TLS implementation and healthy cipher use; however, the growing reliance on encrypted data flows has made tight adherence to protocols on the server side fundamental to effective security throughout the full Internet domain. A frequent mistake made by HTTPS servers is the use of self-signed certificates. Self-signed certificates, which have no authority to back up their validity, work well in testing situations, but when a server needs to accept requests from the public Internet, these false certificates are unsafe. In these cases, a signed certificate from a certificate authority must be acquired or purchased. These servers treat Android traffic the same way as any other traffic, so their pitfalls affect mobile clients just as they do desktop clients. Indeed, the most frequent issue with server configuration is the mishandling of certificate installation BIB004 . Certificate management isn't an automated process. After applying for a certificate from a certificate authority, the certificate is sent by email to the company which made the request, and it must then be manually installed in order for clients to believe that the server is in fact correct. When certificates expire following their two- or three-year lifespan, a smooth transition to a new certificate must be carried out to assure maximum uptime. Vratonjic et al. BIB002 found that, among many other violations, 82.4% of servers investigated used expired or otherwise invalid certificates. Again, in the days following the Heartbleed bug, only 10% of vulnerable servers replaced their potentially compromised certificates, and of those that did, 14% reused the same private key which may have leaked BIB005 BIB008 . These cases demonstrate the difficulty that system operators have with healthy use of certificates. Indeed, the prevalence of these incorrectly deployed certificates has a direct effect on developers and the services that rely on secure Internet connections. If developers can't connect to a server outside their control because of an SSL error, the only course of action is to lower the validation parameters of their application. Both ends of the SSL connection need to maintain the highest level of security, and in order to reach adoption by developers on all platforms, the system must display a reasonable level of consistent functionality. Certificate management is a tricky and complicated aspect of SSL which needs further research, tools, and perspectives before it can reach a realistically reliable state. New technologies in the burgeoning operations world make certificate management even trickier.
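As a rough illustration of what a client sees when a server is misconfigured in this way, the following minimal Java sketch (class name and host are placeholders of ours) connects to a server and prints its certificate chain and expiry dates; against a server presenting a self-signed or otherwise untrusted certificate, the handshake itself fails, which is exactly the error that tempts developers to weaken validation:

```java
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Minimal diagnostic: print a server's certificate chain and expiry dates.
public final class CertCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com"; // placeholder host
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            // Throws SSLHandshakeException if the chain is self-signed or untrusted.
            socket.startHandshake();
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                X509Certificate x509 = (X509Certificate) cert;
                System.out.println(x509.getSubjectX500Principal()
                        + " valid until " + x509.getNotAfter());
            }
        }
    }
}
```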
Content Delivery Networks (CDNs) are distributed server farms which spread out the load on large, public websites. The servers of the CDN act as surrogates for the main web server, stepping into the middle of what was a direct client-server relationship. This middle-man server must be trusted to terminate the client's HTTPS connection on behalf of the origin, but every current method of delegating that trust violates the end-to-end nature of the SSL protocol BIB007 . Innovative methodologies must be contributed to the X.509 protocol and the certificate authority industry to meet the challenges of scaling websites and an ever-increasing pool of vital sites that require certificates to be properly installed. Until servers are properly secured, the security of all client applications will suffer. Developers will be wary of using the protocol, and the default Internet connection methodology on Android will not be HTTPS until it is as easy to implement as cleartext HTTP.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Lacking documentation on HTTPS <s> A large number of software security vulnerabilities are caused by software errors that are committed by software developers. We believe that interactive tool support will play an important role in aiding software developers to develop more secure software. However, an in-depth understanding of how and why software developers produce security bugs is needed to design such tools. We conducted a semi-structured interview study on 15 professional software developers to understand their perceptions and behaviors related to software security. Our results reveal a disconnect between developers' conceptual understanding of security and their attitudes regarding their personal responsibility and practices for software security. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Lacking documentation on HTTPS <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB002
|
Beyond the physical limitations of an SSL connection, one of the problems developers face is a lack of proper documentation and a weak foundation in the importance of application security. There is very little research on interactive support for developers during secure software development BIB001 . Such support is critical to expose developers to correct methodologies and point them toward secure Internet connection creation. While the Android platform prides itself on ease of use, it can be surprisingly confusing. For instance, manual analysis of financial applications with vulnerabilities in their inter-app communication led to the conclusion that several flaws were caused by developer confusion over Android's complexities . The authors believe these errors, made by security-conscious developers, indicate that Android's Intent system is confusing and tricky to use securely. This subject, completely separate from SSL/TLS in purpose and architecture, shows that Android is at its core a complex system that is difficult to comprehend from a front-end developer's standpoint. Existing documentation and tutorials are not reaching their audiences effectively. Addressing the issue of complexity isn't an endeavor that can happen with a single update. However, in order to further the security and proper development practices of Android applications, the maintainers of the operating system must work toward abstracting the complexities or putting out better documentation, and further research must go into the psychology behind technical documentation comprehension, particularly for Android. One example of inadequate training is also the most critical: the Android developer training on SSL/TLS is sorely lacking in proper examples and implementation guidance. The training on security sits near the bottom of the page, listed below trainings on user interface and performance [12] . There is minimal explanation of the protocol or of public key cryptography in general. A lack of solid documentation in popular SSL/TLS libraries also presents an issue BIB002 . The OpenSSL library documentation [11] is a dense webpage that can be rather intimidating; the quick code snippets of Stack Overflow may be much more appealing. Several prominent libraries expose generally confusing APIs, as discussed in the next subsection. In order to fulfill their role in the implementation of SSL, libraries must create documentation for developers who are not cryptography experts, and security-breaking methods like AllowAllHostnameVerifier should be documented as being for testing purposes only [57] . Finally, there are general barriers in coding that need to be broken down before developers can properly build secure programming principles into their products. Research by Ko et al. presents findings on the elements of programming environments which prevent problem solving. The primary takeaway from this study is that many major IDEs and programming language compilers have minimal error reporting infrastructure, that invisible rules seem to exist without much documentation, and that differences between programming interfaces interfere with the natural flow of problem solving. Not only do libraries need to be more informative, but application development tools should be smart enough to identify security flaws or inform developers of best practices.
A solid documentation source would be responsive to developer confusion and effective at communicating the simplest secure solution, such as the one sketched below.
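For instance, a tutorial could open with something along the following lines (class name ours, URL a placeholder), showing that the platform defaults of HttpsURLConnection already perform certificate chain and hostname validation, so no TrustManager or HostnameVerifier customization is needed when the server holds a CA-signed certificate:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

// Minimal secure GET over HTTPS using platform defaults.
// On Android, run this off the main thread (e.g., in a background executor).
public final class SimpleHttpsGet {
    public static String fetch(String urlString) throws Exception {
        HttpsURLConnection connection =
                (HttpsURLConnection) new URL(urlString).openConnection();
        connection.setConnectTimeout(10_000);
        connection.setReadTimeout(10_000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        } finally {
            connection.disconnect();
        }
    }
}
```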
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> In this article we discuss a programming model to establish a secure communication channel by using HTTPS protocol in Android platform, using some public key infrastructure features like public keys and digital certificates. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers. 
<s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> TLS was designed as a transparent channel abstraction to allow developers with no cryptographic expertise to protect their application against attackers that may control some clients, some servers, and may have the capability to tamper with network connections. However, the security guarantees of TLS fall short of those of a secure channel, leading to a variety of attacks. We show how some widespread false beliefs about these guarantees can be exploited to attack popular applications and defeat several standard authentication methods that rely too naively on TLS. We present new client impersonation attacks against TLS renegotiations, wireless networks, challenge-response protocols, and channel-bound cookies. Our attacks exploit combinations of RSA and Diffie-Hellman key exchange, session resumption, and renegotiation to bypass many recent countermeasures. We also demonstrate new ways to exploit known weaknesses of HTTP over TLS. We investigate the root causes for these attacks and propose new countermeasures. At the protocol level, we design and implement two new TLS extensions that strengthen the authentication guarantees of the handshake. At the application level, we develop an exemplary HTTPS client library that implements several mitigations, on top of a previously verified TLS implementation, and verify that their composition provides strong, simple application security. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. 
When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Flaws in SSL/TLS libraries <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24--55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. <s> BIB006
|
The ideal Android HTTPS library would enable developers to use SSL correctly without coding effort and prevent them from breaking certificate validation through customization BIB003 . This would be a model where socket generation and administration of certification authorities are the only responsibilities assigned to the programmer. It would bridge the gap between the control facilities needed to establish HTTPS connections, making it unnecessary to involve programmers in the development of every essential interface in the already complex HTTPS environment BIB001 . Furthermore, the API should allow somewhat relaxed certificate validation while the application is being tested. Dozens of libraries and SSL/TLS abstraction frameworks exist to make HTTPS easier to use. Despite the goal of making the system more approachable, Cairns et al. and others have shown that major SSL/TLS libraries remain too complicated and low-level BIB004 . Fahl et al. claim that there is no solid library which provides easy SSL usage BIB003 . Indeed, it seems that frustration with APIs is the guiding factor behind developers resorting to Stack Overflow to find work-arounds. Georgiev et al. conducted an investigation into critical applications which were compromised by these flawed or poorly written libraries BIB002 . The cURL library is one such confusing library. For example, Amazon's Flexible Payments Service PHP library attempts to enable hostname verification by setting cURL's CURLOPT_SSL_VERIFYHOST parameter to true. Unfortunately, this parameter is not a boolean: a value of 1 (true) merely checks that the certificate contains a host name, while the value 2 is required to verify that it actually matches the peer, and thus the middleware and all applications using it are compromised. PayPal's Payment library makes the same mistake. Beyond cURL, GnuTLS has a misleading gnutls_certificate_verify_peers2 which leaves the Lynx text-based web browser vulnerable. Poorly worded APIs defeat the goal of libraries to make SSL easier to implement correctly. Combined with poor documentation, these libraries can be detrimental to a healthy public key infrastructure. Other problems were pointed out in the study by Georgiev et al.: validation was lacking, documentation was so scarce that users were led to misuse the suite, and error handling differed from library to library. This sort of miscommunication has led developers to frequently use the wrong SSL/TLS library for their specific problem. For instance, the Python libraries urllib2 and httplib, which do not support certificate checking, were used in applications hooking into PayPal and Twitter. The disconnect between end users and libraries can be bridged with better communication, documentation, and standards across libraries. Not only are SSL/TLS APIs often confusing, some contain their own programmatic holes. Apache Axis, which is used by big-name applications from PayPal and Amazon, builds on Apache's HttpClient; Axis uses the standard SSLSocketFactory but omits hostname verification. Using the independent nature of various SSL libraries to compare reactions to certificates, Brubaker et al. found several holes in major open source libraries and browsers BIB005 . While efforts at finding flaws have encouraged library developers to patch their software, more oversight needs to go into these libraries which provide the backbone (and reputation) of the HTTPS ecosystem. Making APIs easier goes hand-in-hand with documentation clarification and developer education on security.
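The hostname-verification gap noted above for Apache Axis has a direct analogue in the base Java APIs: a raw SSLSocket validates the certificate chain but does not check that the certificate matches the host unless endpoint identification is enabled explicitly. The sketch below (class name ours; the setEndpointIdentificationAlgorithm call is available on Java 7+ and recent Android releases) shows the small but easily forgotten step:

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Raw SSLSocket with hostname verification enabled explicitly.
public final class VerifiedSocket {
    public static SSLSocket open(String host, int port) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        SSLParameters params = socket.getSSLParameters();
        // Without this line, the handshake succeeds even if the certificate
        // was issued for a completely different host name.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        socket.startHandshake(); // now fails if the certificate does not match 'host'
        return socket;
    }
}
```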
As technology progresses and more of the Internet supports HTTPS connections, libraries will be forced to become more user-friendly and standardized. Issues like Heartbleed, which allowed attackers to read protected memory from an estimated 24-55% of popular HTTPS sites, while frightening, will encourage more scrutinizing eyes to fall on open source SSL libraries and the infrastructure which supports them BIB006 . Security researchers have called for more development effort on these critical open-source projects in order to protect the entirety of the HTTPS infrastructure. Android developers rely on these libraries, and they must be firmly in place for developers to use.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> HTTPS is the de facto standard for securing Internet communications. Although it is widely deployed, the security provided with HTTPS in practice is dubious. HTTPS may fail to provide security for multiple reasons, mostly due to certificate-based authentication failures. Given the importance of HTTPS, we investigate the current scale and practices of HTTPS and certificate-based deployment. We provide a large-scale empirical analysis that considers the top one million most popular websites. Our results show that very few websites implement certificate-based authentication properly. In most cases, domain mismatches between certificates and websites are observed. We study the economic, legal and social aspects of the problem. We identify causes and implications of the profit-oriented attitude of CAs and show how the current economic model leads to the distribution of cheap certificates for cheap security. Finally, we suggest possible changes to improve certificate-based authentication. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. 
Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> TLS was designed as a transparent channel abstraction to allow developers with no cryptographic expertise to protect their application against attackers that may control some clients, some servers, and may have the capability to tamper with network connections. However, the security guarantees of TLS fall short of those of a secure channel, leading to a variety of attacks. We show how some widespread false beliefs about these guarantees can be exploited to attack popular applications and defeat several standard authentication methods that rely too naively on TLS. We present new client impersonation attacks against TLS renegotiations, wireless networks, challenge-response protocols, and channel-bound cookies. Our attacks exploit combinations of RSA and Diffie-Hellman key exchange, session resumption, and renegotiation to bypass many recent countermeasures. We also demonstrate new ways to exploit known weaknesses of HTTP over TLS. We investigate the root causes for these attacks and propose new countermeasures. At the protocol level, we design and implement two new TLS extensions that strengthen the authentication guarantees of the handshake. At the application level, we develop an exemplary HTTPS client library that implements several mitigations, on top of a previously verified TLS implementation, and verify that their composition provides strong, simple application security. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. 
This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. <s> BIB006 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Issues in the HTTPS protocol <s> A properly managed public key infrastructure (PKI) is critical to ensure secure communication on the Internet. Surprisingly, some of the most important administrative steps---in particular, reissuing new X.509 certificates and revoking old ones---are manual and remained unstudied, largely because it is difficult to measure these manual processes at scale. ::: ::: We use Heartbleed, a widespread OpenSSL vulnerability from 2014, as a natural experiment to determine whether administrators are properly managing their certificates. All domains affected by Heartbleed should have patched their software, revoked their old (possibly compromised) certificates, and reissued new ones, all as quickly as possible. We find the reality to be far from the ideal: over 73% of vulnerable certificates were not reissued and over 87% were not revoked three weeks after Heartbleed was disclosed. Our results also show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. 
These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication. <s> BIB007
|
Beneath the lowest-level SSL libraries sit the TLS and X.509 protocols themselves, and even in this foundation of the HTTPS world there are flaws. As Fahl et al. express, the SSL/TLS protocol isn't forceful enough BIB002 . Validation checks are not a central part of the SSL/TLS and X.509 standards: recommendations are given in these IETF documents, but the actual implementation is left to the application developer. IETF RFC 2818, Section 3.1 states that if the client has external information as to the expected identity of the server, the hostname check may be omitted. Both OpenSSL and cURL have issues with the proper implementation of certificate validation. This leniency in the protocol shifts focus away from a vital part of SSL security. While understandable for applications developed on a limited budget, this guiding document of HTTPS must be more definite on the vital subject of hostname verification, especially for production applications. Beyond weak enforcement of certificate validation, several issues with the TLS protocol leave it vulnerable. Two issues mentioned in a study by Bhargavan et al. BIB004 are that the protocol still permits cipher suites which have been ruled unsafe, and that it allows identities to be reused on session resumption, which can be exploited to bypass the guarantees of the TLS handshake. As security researchers continue to look into the TLS protocol and its shortcomings, more hardening mechanisms will be determined. The IETF has presented server-side policies to curb the use of HTTP instead of HTTPS, such as HTTP Strict Transport Security (HSTS) . As described in Fig. 5 , HSTS allows a server to insist that traffic always be redirected to HTTPS through the use of an additional response header (a minimal sketch of setting this header appears at the end of this subsection). This effectively counteracts Moxie Marlinspike's sslsniff [42] . Consumers must manually override occurrences of failed HSTS in their browser. However, these fixes are not as effective as the client-side HTTPS Everywhere [63] , which similarly forces connections to the HTTPS version of a service. Unfortunately, no such implementation of HTTPS Everywhere for communication libraries existed at the time this paper was written. The promise of verified certificate validity on the client side can double the prevention efforts against man-in-the-middle attacks. As HTTPS becomes more of a universal standard, it is clear that communication libraries must offer an HTTPS-only style method to protect user data from plaintext servers and HTTPS stripping attacks. Another critical protocol in SSL security, X.509, is extremely general and flexible: it has many complex security features, few of which are actually used, and parsing X.509 certificates isn't simple. Indeed, it also bears the pressure of new technologies for which it has no solution; the X.509 protocol leaves no room for CDNs, which have become ubiquitous on the Internet BIB005 . Like the TLS protocol, the X.509 certificate protocol needs an update which narrows down its reach, provides rigidity and standard security mechanisms, and is able to adapt to changes in the make-up of Internet routing infrastructure. In the current SSL environment, system operators and developers are affected by the corruption of certificate authorities. As shown in the work of Amann et al. BIB006 and Bates et al., the entire CA system is convoluted, unreliable, and overflowing with too many CAs. When web certificates rely on the authenticity of a CA's web of trust, this web should be as small as possible.
Several high-profile cases of rogue certificate issuance in recent years have raised questions about the security of these trusted servers BIB001 . Nearly every major CA has had a large leak of some sort. Certificate authorities can be socially engineered into surrendering certificates to malicious actors, their systems can be compromised or incorrectly configured, and the CA system is subject to commercial pressures that may not be conducive to healthy certificate validation. Alternatives have been proposed, such as Convergence [71] , which would replace certificate authorities with notaries that query destination servers to verify the validity of the desired path. Further discussion of future validation methodologies for SSL is beyond the purview of this paper. While the CA infrastructure has dire consequences for SSL security, it does not necessarily bear on developer misuse of SSL. Along with the risky manual installation process which server administrators must carry out, the primary flaw in the current CA system that affects developers is the process of simply obtaining a certificate. Nearly all certificate authorities require a fee to receive a certificate BIB003 . This is not conducive to the widespread acceptance of HTTPS. Again, as noted by Zhang et al. following the fallout of Heartbleed BIB007 , a majority of system administrators failed to revoke potentially compromised certificates, leaving them out in the wild to be used against hosts. The certificate authority industry needs to adopt standards which allow easy, free access to certificates and a simple installation process, and the security community must work with system operators to build a more intuitive revocation process. In the best case, the introduction of a simpler method of certificate deployment and reset would secure the systems of critical Internet applications. Complications and vulnerabilities at the protocol level trickle down to the designs of libraries and applications. To ensure developers use SSL properly, the system must be no more difficult to implement than a standard HTTP call.
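Part of the revocation problem is also client-side: many stacks never ask whether a presented certificate has been revoked, so a failure to revoke goes unnoticed. The sketch below shows how a client can opt into revocation checking with the standard PKIX APIs; it assumes a Java SE 8+ runtime (availability on Android varies by version), and the trust store and certificate path are supplied by the caller.

```java
import java.security.KeyStore;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.PKIXParameters;
import java.security.cert.PKIXRevocationChecker;
import java.util.EnumSet;

// Sketch: validate a server chain with revocation checking enabled.
// OCSP is preferred by default; SOFT_FAIL tolerates an unreachable
// responder instead of failing the whole validation.
public final class RevocationCheckExample {
    public static void validate(KeyStore trustStore, CertPath certPath) throws Exception {
        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker revocation =
                (PKIXRevocationChecker) validator.getRevocationChecker();
        revocation.setOptions(EnumSet.of(PKIXRevocationChecker.Option.SOFT_FAIL));

        PKIXParameters params = new PKIXParameters(trustStore);
        params.addCertPathChecker(revocation); // explicit checker drives revocation
        validator.validate(certPath, params);  // throws if the chain is invalid
    }
}
```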
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> Android's permission system is intended to inform users about the risks of installing applications. When a user installs an application, he or she has the opportunity to review the application's permission requests and cancel the installation if the permissions are excessive or objectionable. We examine whether the Android permission system is effective at warning users. In particular, we evaluate whether Android users pay attention to, understand, and act on permission information during installation. We performed two usability studies: an Internet survey of 308 Android users, and a laboratory study wherein we interviewed and observed 25 Android users. Study participants displayed low attention and comprehension rates: both the Internet survey and laboratory study found that 17% of participants paid attention to permissions during installation, and only 3% of Internet survey respondents could correctly answer all three permission comprehension questions. This indicates that current Android permission warnings do not help most users make correct security decisions. However, a notable minority of users demonstrated both awareness of permission warnings and reasonable rates of comprehension. We present recommendations for improving user attention and comprehension, as well as identify open challenges. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). 
To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Need for consumer awareness <s> Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. 
Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. <s> BIB006
|
Finally, among all of the guilty parties behind HTTPS vulnerabilities, the most overlooked is the end user. As the work of Felt et al. has shown with regard to web browsers, users frequently disregard warnings about SSL/TLS BIB005 . This issue is just as pressing on the Android platform. Another paper by Felt BIB001 explores Android user attention when shown messages about application permissions. Granting applications permissions is a critical responsibility which users must bear in order to protect their privacy and security. Unfortunately, when 308 users were surveyed, only 17% of participants paid attention to permissions during installation and only 3% of respondents correctly answered all three permission comprehension questions. In effect, the public treats applications as black boxes. Few users are critical enough of their applications to even read the warnings. This is worrisome since the pace of change is set by consumers: unless end users demand security, it will not be implemented in a widespread manner. When investigating user comprehension of HTTPS on Android, the numbers are equally bleak. An online survey by Fahl et al. shows that half of the approximately 750 users questioned could not determine whether they were using HTTP or HTTPS BIB002 . Many of the participants failed to read the entire warning message. Participants were mostly college-aged and included students majoring in IT-related and non-IT-related fields; results showed that even this group did not have a sufficient understanding of data security. Despite the difficulty of informing users about the risks of insecure TLS connections, no group can push developers to properly implement HTTPS more effectively than their end users. One contributor to this lack of user comprehension is that Android does not offer any default warning for SSL errors. This forces developers to provide one themselves if they wish to inform users about failed certificate validations BIB003 . Furthermore, error reporting in libraries and browsers is broken BIB006 . A study by Brubaker et al. found that many major browsers report only one error even when several are present. It can be presumed that Android applications, which receive far less oversight than these browsers, have even worse error reporting. Moreover, the messages produced by libraries are not always human-readable, and applications frequently do little to clarify them. This leads to uninformed end users who click through warnings that seem unimportant. One notable problem that undermines the credibility of SSL errors is the high number of false positives. A study by Akhawe et al. BIB004 , analyzing SSL errors at a mass scale, found that 1.54% were false warnings caused by misreading the messages from the SSL API. In order for SSL to secure Internet communications, end users must remain vigilant about the integrity of their connections and the security of their applications, and developers must work to ensure that users are informed and able to check the security of the application itself. In the next section, we will present a list of solutions proposed by researchers and industry leaders and their ideas on combating broken SSL channels.
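Since Android supplies no default SSL warning, the burden of making failures visible falls entirely on the application. The following minimal sketch illustrates the pattern: catch the handshake failure, tell the user in plain language, and refuse to continue silently. The warnUser helper and the exact wording are hypothetical placeholders for whatever dialog or notification mechanism a given app uses.

```java
import java.io.IOException;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.SSLPeerUnverifiedException;

// Sketch: make certificate failures visible to the user instead of silent.
public final class VisibleSslErrorExample {
    public static void fetch(String url) throws IOException {
        HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
        try {
            conn.connect(); // performs the TLS handshake and certificate validation
        } catch (SSLPeerUnverifiedException e) {
            warnUser("The server's identity could not be verified. "
                    + "Your connection may be intercepted; do not continue.");
            throw e;
        } catch (SSLHandshakeException e) {
            warnUser("A secure connection could not be established: " + e.getMessage());
            throw e;
        } finally {
            conn.disconnect();
        }
    }

    private static void warnUser(String message) {
        // Placeholder: a real app would show a dialog or notification here.
        System.err.println("[SECURITY WARNING] " + message);
    }
}
```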
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Link SSL to DEBUGGABLE flag in the Android manifest <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Link SSL to DEBUGGABLE flag in the Android manifest <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB002
|
Based on research by Fahl et al. BIB001 and Georgiev et al. BIB002 , Tendulkar et al. suggest changes in the Android manifest to encourage secure development practices. A single link between the DEBUGGABLE flag in the Android manifest and SSL/TLS verification would allow developers to build their apps using mock certificates while still eventually forcing them to make their SSL/TLS connections functional in production. Certificate checks would have to be intact if debugging were off, but the app could accept self-signed certificates if it were on. Applications submitted to the Market with the DEBUGGABLE flag set would be rejected. This solution directly counteracts many of the issues caused by developers forgetting their debug code in their applications. It is a simple change that should be implemented in the Android manifest, and of the solutions enumerated in this paper it is the most direct and effective way to let Android developers write safe SSL communication code. While it may require some coordination between the Android Market maintainers and developers who already have their applications live, the change would create a sensible workflow for all developers to follow.
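Until such a manifest-level link exists, an approximation of the proposed workflow can be enforced in application code by keying any relaxed trust configuration to the platform's debuggable flag. The sketch below is illustrative only; installDebugTrustManager() is a placeholder for a team's test-only trust setup, not an Android API.

```java
import android.content.Context;
import android.content.pm.ApplicationInfo;

// Sketch of the workflow Tendulkar et al. propose: relaxed certificate checks
// are only reachable when the app was built with android:debuggable="true".
public final class DebugOnlySslConfig {
    public static void configure(Context context) {
        boolean debuggable =
                (context.getApplicationInfo().flags & ApplicationInfo.FLAG_DEBUGGABLE) != 0;
        if (debuggable) {
            installDebugTrustManager(); // accept the self-signed staging certificate
        }
        // In release builds nothing is changed: the platform's default
        // certificate and hostname validation stays fully intact.
    }

    private static void installDebugTrustManager() {
        // Placeholder for a test-only TrustManager limited to the staging host.
    }
}
```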
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Android market and client side application validation <s> This paper focuses on the threat of packet sniffing in a switched environment, and briefly explores the effect in a non-switched environment. Detail is given on a number of techniques, such as "ARP (Address Resolution Protocol) spoofing", which can allow an attacker to eavesdrop on network traffic in a switched environment. Third party tools exist that permit sniffing on a switched network. The result of running some of these tools on an isolated, switched network is presented, and clearly demonstrates that the threat they pose is real and significant. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Android market and client side application validation <s> Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides practical means of detecting malware. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Android market and client side application validation <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. 
Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Android market and client side application validation <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Android market and client side application validation <s> Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. 
For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. <s> BIB005
|
Applications submitted to the Android market could be required to undergo scrutiny by MalloDroid BIB003 or the automated fuzzing framework noted in recent work by Malek et al. BIB001 . Fuzzing, also discussed in other SSL testing research BIB004 , would test how many certificates built from randomly generated data an application accepts, in a manner similar to Frankencerts BIB005 . Applications using AllowAllHostnameVerifier [57] would be flagged. Another solution, proposed by Enck et al., suggests Kirin BIB002 as an on-device service that checks applications for dangerous permissions and malicious code. This could be refitted into a service that also verifies that downloaded applications use HTTPS properly, flagging those that use unsafe certificate verification methods or custom root stores. Such a device-based solution would vet not only apps from the Android Market, but also apps from open-source repositories such as F-Droid [76] . This solution would not prevent developers from writing non-HTTPS code, but it would stop such applications from reaching production markets. Difficulties in implementing it include the setup and oversight required of market operators and the added restrictions placed on applications that may not handle sensitive data.
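As a rough illustration of what such market-side or on-device vetting could look for, the sketch below flags decompiled sources that mention identifiers commonly associated with disabled validation. It is deliberately naive (plain string matching over Java 11+ file APIs, with hypothetical pattern names); production tools such as MalloDroid analyze the app's bytecode and data flow instead.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Crude sketch of a vetting pass: flag decompiled sources that reference
// identifiers associated with disabled SSL validation.
public final class InsecureSslPatternScan {
    private static final List<String> RISKY_IDENTIFIERS = List.of(
            "AllowAllHostnameVerifier",
            "ALLOW_ALL_HOSTNAME_VERIFIER",
            "TrustAllCerts"            // common name for permissive TrustManagers
    );

    public static List<Path> scan(Path decompiledRoot) throws IOException {
        try (Stream<Path> files = Files.walk(decompiledRoot)) {
            return files
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(InsecureSslPatternScan::containsRiskyIdentifier)
                    .collect(Collectors.toList());
        }
    }

    private static boolean containsRiskyIdentifier(Path file) {
        try {
            String source = Files.readString(file);
            return RISKY_IDENTIFIERS.stream().anyMatch(source::contains);
        } catch (IOException e) {
            return false; // unreadable files are simply skipped in this sketch
        }
    }
}
```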
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Begin persistent Internet-wide SSL vulnerability scanning <s> Over the years, SSL/TLS has become an essential part of internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS servers in the wild, we enumerated HTTPS servers on the internet in July 2010 and July 2011. We sent several stimuli to the servers to gather detailed information. We then analysed some parameters of the collected data and looked at how they evolved. We also focused on two subsets of TLS hosts within our measure: the trusted hosts (possessing a valid certificate at the time of the probing) and the EV hosts (presenting a trusted, so-called Extended Validation certificate). Our contributions rely on this methodology: the stimuli we sent, the criteria we studied and the subsets we focused on. Moreover, even if EV servers present a somewhat improved certificate quality over the TLS hosts, we show they do not offer overall high quality sessions, which could and should be improved. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Begin persistent Internet-wide SSL vulnerability scanning <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24--55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Begin persistent Internet-wide SSL vulnerability scanning <s> In this paper, we propose stochastic fingerprints for application traffic flows conveyed in Secure Socket Layer/Transport Layer Security (SSL/TLS) sessions. The fin- gerprints are based on first-order homogeneous Markov chains for which we identify the parameters from observed training application traces. As the fingerprint parameters of chosen applications considerably differ, the method results in a very good accuracy of application discrimination and provides a possibility of detecting abnormal SSL/TLS sessions. Our analysis of the results reveals that obtaining application discrimination mainly comes from incorrect implementation practice, the misuse of the SSL/TLS protocol, various server configurations, and the application nature. fingerprints of sessions to classify application traffic. We call a fingerprint any distinctive feature allowing identification of a given traffic class. 
In this work, a fingerprint corresponds to a first-order homogeneous Markov chain reflecting the dynamics of an SSL/TLS session. The Markov chain states model a sequence of SSL/TLS message types appearing in a single direction flow of a given application from a server to a client. We have studied the Markov chain fingerprints for twelve representative applications that make use of SSL/TLS: PayPal (an electronic service allowing online payments and money transfers), Twitter (an online social networking and micro- blogging service), Dropbox (a file hosting service), Gadu- Gadu (a popular Polish instant messenger), Mozilla (a part of Mozilla add-ons service responsible for verification of the software version), MBank and PKO (two popular European online banking services), Dziekanat (student online service), Poczta (student online mail service), Amazon S3 (a Simple Storage Service) and EC2 (an Elastic Compute Cloud), and Skype (a VoIP service). The resulting models exhibit a specific structure allowing to classify encrypted application flows by comparing its message sequences with fingerprints. They can also serve to reveal intrusions trying to exploit the SSL/TLS protocol by establishing abnormal communications with a server. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Begin persistent Internet-wide SSL vulnerability scanning <s> A properly managed public key infrastructure (PKI) is critical to ensure secure communication on the Internet. Surprisingly, some of the most important administrative steps---in particular, reissuing new X.509 certificates and revoking old ones---are manual and remained unstudied, largely because it is difficult to measure these manual processes at scale. ::: ::: We use Heartbleed, a widespread OpenSSL vulnerability from 2014, as a natural experiment to determine whether administrators are properly managing their certificates. All domains affected by Heartbleed should have patched their software, revoked their old (possibly compromised) certificates, and reissued new ones, all as quickly as possible. We find the reality to be far from the ideal: over 73% of vulnerable certificates were not reissued and over 87% were not revoked three weeks after Heartbleed was disclosed. Our results also show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication. <s> BIB004
|
Research conducted by the EFF, Durumeric et al. BIB002 , Zhang et al. BIB004 , Levillain et al. BIB001 , and others following Heartbleed has shown that widespread scanning of the Internet for holes in SSL security is possible. These scans reveal how safe particular hosts and servers are and, potentially, where attacks originate. As the work of Durumeric et al. has shown, such scans also allow researchers to notify server operators whose systems may be vulnerable to attack, and this type of notification has proven effective in improving the safety of the Internet. A movement toward Internet-wide vulnerability scanning therefore has positive implications for patching holes in HTTPS security. As new paradigms for SSL analysis are developed BIB003 , the capacity for SSL adherence analysis grows. Yet the growing number of network flows across the Internet will make tracing individual applications and versions more challenging. Any observatory must accurately and transparently identify applications that may be at risk and be able to notify their developers promptly.
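At its simplest, each measurement in such a scan is just a TLS handshake whose negotiated parameters and presented certificates are recorded. The sketch below shows a single probe against one host using the standard Java socket APIs; real measurement platforms parallelize this across the address space and store far richer metadata.

```java
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Minimal sketch of a single TLS probe: connect, complete the handshake,
// and record what the server presented.
public final class TlsProbeExample {
    public static void probe(String host) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            System.out.println("protocol: " + socket.getSession().getProtocol());
            System.out.println("cipher:   " + socket.getSession().getCipherSuite());
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                if (cert instanceof X509Certificate) {
                    X509Certificate x509 = (X509Certificate) cert;
                    System.out.println("subject:  " + x509.getSubjectX500Principal());
                    System.out.println("expires:  " + x509.getNotAfter());
                }
            }
        }
    }
}
```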
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Large mobile applications should use stronger HTTPS protections <s> The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonplace these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attack on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses to such attacks are also discussed. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Large mobile applications should use stronger HTTPS protections <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Large mobile applications should use stronger HTTPS protections <s> In this paper, we propose stochastic fingerprints for application traffic flows conveyed in Secure Socket Layer/Transport Layer Security (SSL/TLS) sessions. The fin- gerprints are based on first-order homogeneous Markov chains for which we identify the parameters from observed training application traces. As the fingerprint parameters of chosen applications considerably differ, the method results in a very good accuracy of application discrimination and provides a possibility of detecting abnormal SSL/TLS sessions. Our analysis of the results reveals that obtaining application discrimination mainly comes from incorrect implementation practice, the misuse of the SSL/TLS protocol, various server configurations, and the application nature. fingerprints of sessions to classify application traffic. We call a fingerprint any distinctive feature allowing identification of a given traffic class. 
In this work, a fingerprint corresponds to a first-order homogeneous Markov chain reflecting the dynamics of an SSL/TLS session. The Markov chain states model a sequence of SSL/TLS message types appearing in a single direction flow of a given application from a server to a client. We have studied the Markov chain fingerprints for twelve representative applications that make use of SSL/TLS: PayPal (an electronic service allowing online payments and money transfers), Twitter (an online social networking and micro- blogging service), Dropbox (a file hosting service), Gadu- Gadu (a popular Polish instant messenger), Mozilla (a part of Mozilla add-ons service responsible for verification of the software version), MBank and PKO (two popular European online banking services), Dziekanat (student online service), Poczta (student online mail service), Amazon S3 (a Simple Storage Service) and EC2 (an Elastic Compute Cloud), and Skype (a VoIP service). The resulting models exhibit a specific structure allowing to classify encrypted application flows by comparing its message sequences with fingerprints. They can also serve to reveal intrusions trying to exploit the SSL/TLS protocol by establishing abnormal communications with a server. <s> BIB003
|
More specifically, large mobile applications made by Facebook, Amazon, and Google have the ability to verify their own certificates and thereby detect MITM attacks BIB001 . Companies that can spare the bandwidth and application space should look into origin-bound certificates (OBCs) BIB001 and the use of HSTS BIB002 . These fixes are primarily server-side changes, but their use can secure popular mobile applications. Depending on the success of these systems, APIs for third-party apps which hook into company servers could also require clients to present certificates. This may prove challenging in scenarios where the application is closed source. However, even black-box approaches BIB003 are able to identify patterns in SSL traffic that indicate unsafe SSL usage. The technology industry seems to understand the importance of SSL, but real-world implementations of the strongest, and arguably most complex, security measures are rarely ideal.
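Self-verification of this kind usually takes the form of public-key pinning. A minimal sketch using only the standard javax.net.ssl API is shown below; the pinned hash is a placeholder, and a production implementation would normally run the platform's default chain validation in addition to the pin check rather than relying on the pin alone.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Base64;
import javax.net.ssl.X509TrustManager;

// Sketch of pinning a backend's public key inside a custom TrustManager.
// PINNED_KEY_SHA256 is a placeholder for the SHA-256 hash of the real key.
public final class PinningTrustManager implements X509TrustManager {
    private static final String PINNED_KEY_SHA256 = "REPLACE_WITH_BASE64_SHA256_OF_PUBLIC_KEY";

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(chain[0].getPublicKey().getEncoded());
            String actual = Base64.getEncoder().encodeToString(digest);
            if (!PINNED_KEY_SHA256.equals(actual)) {
                throw new CertificateException("Server key does not match the pinned key");
            }
        } catch (NoSuchAlgorithmException e) {
            throw new CertificateException("SHA-256 unavailable", e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // Not used for client-side connections; reject by default.
        throw new CertificateException("Client certificates are not accepted here");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```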
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Revise the TLS protocol suite <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Revise the TLS protocol suite <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Revise the TLS protocol suite <s> TLS was designed as a transparent channel abstraction to allow developers with no cryptographic expertise to protect their application against attackers that may control some clients, some servers, and may have the capability to tamper with network connections. However, the security guarantees of TLS fall short of those of a secure channel, leading to a variety of attacks. We show how some widespread false beliefs about these guarantees can be exploited to attack popular applications and defeat several standard authentication methods that rely too naively on TLS. 
We present new client impersonation attacks against TLS renegotiations, wireless networks, challenge-response protocols, and channel-bound cookies. Our attacks exploit combinations of RSA and Diffie-Hellman key exchange, session resumption, and renegotiation to bypass many recent countermeasures. We also demonstrate new ways to exploit known weaknesses of HTTP over TLS. We investigate the root causes for these attacks and propose new countermeasures. At the protocol level, we design and implement two new TLS extensions that strengthen the authentication guarantees of the handshake. At the application level, we develop an exemplary HTTPS client library that implements several mitigations, on top of a previously verified TLS implementation, and verify that their composition provides strong, simple application security. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Revise the TLS protocol suite <s> Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations. <s> BIB004
|
Beyond issues with developers, the TLS protocol needs to be revised to allow for further progress in technological security. IETF RFC 2818, Section 3.1 , which deals with HTTP over TLS, needs to be stricter about validation guidelines. The protocol must require hostname and certificate validation, and the community must adopt the strongest standard possible and implement it correctly; from there, applications which deal with user data can be built. The IETF gives the following recommendations to certificate authorities and client developers: move away from including and checking strings that look like domain names in the subject's Common Name; move toward including and checking DNS domain names via the subjectAltName extension designed for that purpose (dNSName); move toward including and checking even more specific subjectAltName extensions where appropriate for the protocol in use (e.g., uniformResourceIdentifier and the otherName form SRVName); and move away from the issuance of so-called wildcard certificates (e.g., a certificate containing an identifier for "*.example.com"). Furthermore, X.509 needs revision . In order to make way for CDNs, several amendments have been suggested, such as DNS-Based Authentication of Named Entities (DANE) BIB002 . To defend the protocol against resumption attacks, Bhargavan et al. BIB003 suggest creating a new channel binding that serves as a unique session hash, so that master secrets are bound to this per-session value; they also recommend a secure resumption indicator which forces connections to check previous sessions. Stricter name constraints can define exactly who is receiving a certificate and make social engineering more difficult. Certificate Transparency (CT) represents a more promising proposal: it compiles a public log of the certificates issued on the Internet, which allows the public to view and investigate fraudulent certificates issued in preparation for MITM attacks. This protocol, of course, relies on interested parties, such as the EFF's SSL Observatory , paying attention. It could also help CAs determine the validity of certificate requests, and combining CT with pinning would greatly increase security BIB004 . The openness of these systems will not only spread awareness of SSL security but hopefully also spur further educational materials and human-friendly implementations. One other addendum to the protocol would, rather than letting certificates expire and immediately throw a fatal error, have certificates warn the administrator for a week beforehand and present more relaxed warnings BIB001 . CAs can also use more specific revocation lists: some for normal expirations and some for blacklists BIB001 . These two changes would prevent a large number of false-positive warnings which undermine the social comprehension of SSL. Finally, Android specifically could benefit from implementing a device-wide web security policy which would guide its implementations of SSL to a strong standard. The TLS/X.509 protocol suite can benefit from dozens of new additions and specifications to meet the needs of the applications which use it. The main open question is which direction would be the most sustainable moving forward while garnering both industry and academic support. While these increases in strictness will make the line between secure and insecure architectures clear, they may pose an issue for developers and users who prioritize performance and availability over security.
While suggestions for a warning escalation system may alleviate the pressure on developers and system administrators, much innovation and discussion remain before a certificate architecture that addresses both security and usability can be put in place.
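The direction of the IETF guidance quoted above can be made concrete with a small example: match the requested hostname against dNSName entries of the subjectAltName extension rather than the Common Name. The sketch below uses the standard X509Certificate API and deliberately omits the wildcard, IP-address, and IDN handling that a real verifier must add.

```java
import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

// Sketch: hostname matching against subjectAltName dNSName entries only.
public final class SubjectAltNameCheck {
    private static final int DNS_NAME = 2; // GeneralName type tag for dNSName

    public static boolean matches(String requestedHost, X509Certificate cert)
            throws CertificateParsingException {
        Collection<List<?>> altNames = cert.getSubjectAlternativeNames();
        if (altNames == null) {
            return false; // no SAN extension: reject rather than fall back to CN
        }
        for (List<?> entry : altNames) {
            Integer type = (Integer) entry.get(0);
            if (type == DNS_NAME) {
                String dnsName = (String) entry.get(1);
                if (requestedHost.equalsIgnoreCase(dnsName)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```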
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Increase consumer awareness <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Increase consumer awareness <s> When browsers report TLS errors, they cannot distinguish between attacks and harmless server misconfigurations; hence they leave it to the user to decide whether continuing is safe. However, actual attacks remain rare. As a result, users quickly become used to "false positives" that deplete their attention span, making it unlikely that they will pay sufficient scrutiny when a real attack comes along. Consequently, browser vendors should aim to minimize the number of low-risk warnings they report. To guide that process, we perform a large-scale measurement study of common TLS warnings. Using a set of passive network monitors located at different sites, we identify the prevalence of warnings for a total population of about 300,000 users over a nine-month period. We identify low-risk scenarios that consume a large chunk of the user attention budget and make concrete recommendations to browser vendors that will help maintain user attention in high-risk situations. We study the impact on end users with a data set much larger in scale than the data sets used in previous TLS measurement studies. A key novelty of our approach involves the use of internal browser code instead of generic TLS libraries for analysis, providing more accurate and representative results. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Increase consumer awareness <s> Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. 
We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions. <s> BIB003
|
Building user pressure on developers seems possible at this point in only a few ways. If platform developers were to implement an effective non-HTTPS warning system in Android, the hands of developers would be pushed BIB001 . This would not alert all users, but it would alert those who are security-conscious. Going further and preventing users from visiting sites with misconfigured SSL/TLS would force developers to fix their authentication issues, though it may inconvenience users BIB003 . Less frequent and more accurate warnings may stop end users from ignoring the errors, since users will obviously click through messages if they are bombarded with them BIB002 . The end user is most familiar with SSL when it gives them an error; these interactions need to be more meaningful and human-understandable. As with any security education, placing proper importance on SSL/TLS will require concrete examples and explanations of why sites with broken certificates should almost always be avoided.
|
A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> A large number of software security vulnerabilities are caused by software errors that are committed by software developers. We believe that interactive tool support will play an important role in aiding software developers to develop more secure software. However, an in-depth understanding of how and why software developers produce security bugs is needed to design such tools. We conducted a semi-structured interview study on 15 professional software developers to understand their perceptions and behaviors related to software security. Our results reveal a disconnect between developers' conceptual understanding of security and their attitudes regarding their personal responsibility and practices for software security. <s> BIB001 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated. <s> BIB002 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> Over the years, SSL/TLS has become an essential part of internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS servers in the wild, we enumerated HTTPS servers on the internet in July 2010 and July 2011. We sent several stimuli to the servers to gather detailed information. We then analysed some parameters of the collected data and looked at how they evolved. We also focused on two subsets of TLS hosts within our measure: the trusted hosts (possessing a valid certificate at the time of the probing) and the EV hosts (presenting a trusted, so-called Extended Validation certificate). 
Our contributions rely on this methodology: the stimuli we sent, the criteria we studied and the subsets we focused on. Moreover, even if EV servers present a somewhat improved certificate quality over the TLS hosts, we show they do not offer overall high quality sessions, which could and should be improved. <s> BIB003 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations. <s> BIB004 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> In this paper, we propose stochastic fingerprints for application traffic flows conveyed in Secure Socket Layer/Transport Layer Security (SSL/TLS) sessions. The fin- gerprints are based on first-order homogeneous Markov chains for which we identify the parameters from observed training application traces. As the fingerprint parameters of chosen applications considerably differ, the method results in a very good accuracy of application discrimination and provides a possibility of detecting abnormal SSL/TLS sessions. Our analysis of the results reveals that obtaining application discrimination mainly comes from incorrect implementation practice, the misuse of the SSL/TLS protocol, various server configurations, and the application nature. fingerprints of sessions to classify application traffic. We call a fingerprint any distinctive feature allowing identification of a given traffic class. In this work, a fingerprint corresponds to a first-order homogeneous Markov chain reflecting the dynamics of an SSL/TLS session. The Markov chain states model a sequence of SSL/TLS message types appearing in a single direction flow of a given application from a server to a client. 
We have studied the Markov chain fingerprints for twelve representative applications that make use of SSL/TLS: PayPal (an electronic service allowing online payments and money transfers), Twitter (an online social networking and micro- blogging service), Dropbox (a file hosting service), Gadu- Gadu (a popular Polish instant messenger), Mozilla (a part of Mozilla add-ons service responsible for verification of the software version), MBank and PKO (two popular European online banking services), Dziekanat (student online service), Poczta (student online mail service), Amazon S3 (a Simple Storage Service) and EC2 (an Elastic Compute Cloud), and Skype (a VoIP service). The resulting models exhibit a specific structure allowing to classify encrypted application flows by comparing its message sequences with fingerprints. They can also serve to reveal intrusions trying to exploit the SSL/TLS protocol by establishing abnormal communications with a server. <s> BIB005 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually and independently. This paper provides a systematic study on how these two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within security and CDN community on more preferable solutions. <s> BIB006 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24--55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. 
<s> BIB007 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> We present SAMPLES: Self Adaptive Mining of Persistent LExical Snippets; a systematic framework for classifying network traffic generated by mobile applications. SAMPLES constructs conjunctive rules, in an automated fashion, through a supervised methodology over a set of labeled flows (the training set). Each conjunctive rule corresponds to the lexical context, associated with an application identifier found in a snippet of the HTTP header, and is defined by: (a) the identifier type, (b) the HTTP header-field it occurs in, and (c) the prefix/suffix surrounding its occurrence. Subsequently, these conjunctive rules undergo an aggregate-and-validate step for improving accuracy and determining a priority order. The refined rule-set is then loaded into an application-identification engine where it operates at a per flow granularity, in an extract-and-lookup paradigm, to identify the application responsible for a given flow. Thus, SAMPLES can facilitate important network measurement and management tasks --- e.g. behavioral profiling [29], application-level firewalls [21,22] etc. --- which require a more detailed view of the underlying traffic than that afforded by traditional protocol/port based methods. We evaluate SAMPLES on a test set comprising 15 million flows (approx.) generated by over 700 K applications from the Android, iOS and Nokia market-places. SAMPLES successfully identifies over 90% of these applications with 99% accuracy on an average. This, in spite of the fact that fewer than 2% of the applications are required during the training phase, for each of the three market places. This is a testament to the universality and the scalability of our approach. We, therefore, expect SAMPLES to work with reasonable coverage and accuracy for other mobile platforms --- e.g. BlackBerry and Windows Mobile --- as well. <s> BIB008 </s> A Survey on HTTPS Implementation by Android Apps: Issues and Countermeasures <s> Moving forward <s> The use of secure HTTP calls is a first and critical step toward securing the Android application data when the app interacts with the Internet. However, one of the major causes for the unencrypted communication is app developer's errors or ignorance. Could the paradigm of literally repetitive and ineffective emphasis shift towards emphasis as a mechanism? This paper introduces emphaSSL, a simple, practical and readily-deployable way to harden networking security in Android applications. Our emphaSSL could guide app developer's security development decisions via real-time feedback, informative warnings and suggestions. At its core of emphaSSL, we use a set of rigorous security rules, which are obtained through an in-depth SSL/TLS security analysis based on security requirements engineering techniques. We implement emphaSSL via the PMD and evaluate it against 75 open- source Android applications. Our results show that emphaSSL is effective at detecting security violations in HTTPS calls with a very low false positive rate, around 2%. Furthermore, we identified 164 substantial SSL mistakes in these testing apps, 40% of which are potentially vulnerable to man-in-the-middle attacks. In each of these instances, the vulnerabilities could be quickly resolved with the assistance of our highlighting messages in emphaSSL. Upon notifying developers of our findings in their applications, we received positive responses and interest in this approach. <s> BIB009
|
The previous sections open up several ideas for next steps in research to prevent Android developer misuse of HTTPS. In this section, we present some recommendations for future work. A productive solution to the issue of misinformation and SSL ignorance would be the creation of an online resource that serves as a single, accurate reference for the growing number of Android developers seeking to implement HTTPS in their applications. This solution would work with existing parties such as Android Developer Training and Stack Overflow to present credible and understandable information. A project in this field would include a primer on public key infrastructure, the proper usage of HTTPS, current attacks on SSL, a presentation of the most popular ways of implementing TLS on Android, and directions on how to acquire a server certificate. The presentation would be easy to read and include links to resources for further study and more specific problem solutions. An IDE plugin that provides real-time feedback on the legitimacy of HTTPS calls could be developed to point out mistakes to developers. Similar to warnings that arise when using C's vulnerable strcpy, this plugin could then be tested for effectiveness at properly informing developers of their mistakes and the proper way to implement SSL. This plugin would need to return human-readable and specific errors. An experimental plugin, emphaSSL BIB009, has been developed in pursuit of this idea. The review of 75 open-source applications showed that 40% of the applications had significant violations of the TLS protocol. This is concerning given the popularity of some of these applications and the sensitive data they transmit. Further research is needed to determine how effective feedback within the IDE is for developers and how best to present security suggestions during the product creation lifecycle. Fahl et al. BIB002 mention the implementation of their service MalloDroid as part of the Android Market or as a web application. In order to bring the benefits of MalloDroid to end users, an experimental service based on MalloDroid could be developed that would detect applications with vulnerable SSL connections and alert the program operator. This could be deployed either as an Android Market service or on the end user's device; either model would have static code checking at its core, and an experiment could evaluate its success rate. Furthermore, the work of Yao et al. BIB008 could be used by market administrators and network watchdogs to identify insecure traffic and pinpoint the applications and specific software versions that are vulnerable. Individual flows could be analyzed for weak ciphers, expired and self-signed certificates, as well as completely plain-text packets. This would give greater oversight of existing network traffic at a more practical and automated level than static code analysis and would extend oversight to closed-source applications. Implementations of traffic fingerprinting and analysis have been conducted BIB005 BIB003, which give great insight into the way various Internet services handle SSL overall. As mentioned in the work of Georgiev et al. BIB004, several open-source libraries could benefit from a reworded API and stronger documentation. Another project that could originate from this survey would be an effort to contribute clear method names and documentation to these open-source libraries.
This would require strong collaboration with security experts in the community and further research into the psychological implications of programming syntax. One of the more specific solutions for Android that could come from existing desktop-scale work would be the implementation of CDNSEC [82], a Firefox add-on that demonstrates the DANE protocol, as an Android service. While this would serve a very specific purpose based deeply on the work of Liang et al. BIB006 rather than on developers' SSL comprehension, it would be a first step toward adoption of DANE and, by extension, forward-thinking SSL security on multiple platforms. Furthermore, developing Convergence [71], a CA-free certificate validation system, on the Android platform would allow the promising protocol to expand and would test the implications of its overhead on mobile phones. A less technical research project could be conducted in a similar manner to Xie et al.'s survey of developers BIB001, but focusing on asking developers what their major challenges were in implementing HTTPS. Following the survey, the experimenters could look into the applications made by these developers to see how SSL was implemented. Conclusions drawn from this would go into refining documentation, educational materials, and SSL libraries. Furthermore, developers could be presented with a situation that requires an HTTPS call in their chosen language. The experimenters would then record the comprehension of the developer, whether or not web resources were used, and how well the resulting implementation would withstand a MITM attack. Again, a device-wide security policy could be proposed or discussed in further research, which would encourage Android developers to adopt a standard set of security procedures and set a benchmark for SSL usage. This exists in diverse formats, but the presentation of a unified system would fulfill the call issued by Jeff Hodges and Andy Steingruebl for a web security policy framework, but with a particularly mobile lean. The development of a sustainable Internet-scanning service for security researchers would allow for further research into the shortcomings that still exist within the HTTPS protocol in its current form. This tool would be available to researchers, commercial entities, and security organizations in order to find holes to patch. The outcome would be much like the work of Durumeric et al. following Heartbleed BIB007 : extensive notification of vulnerable entities in the hope that these systems would be patched quickly. Further solutions will certainly arise for the Android platform as research into new protocols, languages, and programming paradigms continues.
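To make the static-checking direction above concrete, the following minimal sketch (in Python; the patterns are illustrative heuristics, not the actual MalloDroid or emphaSSL rule sets) scans Java sources for common SSL misuse idioms discussed in this survey, such as empty checkServerTrusted bodies and permissive hostname verifiers:

```python
import re
import sys
from pathlib import Path

# Textual patterns that typically indicate broken certificate validation in
# Android/Java code. These are illustrative heuristics only.
RISK_PATTERNS = {
    "trust-all TrustManager": re.compile(
        r"checkServerTrusted\s*\([^)]*\)\s*(?:throws[^{]*)?\{\s*\}", re.S),
    "permissive HostnameVerifier": re.compile(
        r"public\s+boolean\s+verify\s*\([^)]*\)\s*\{\s*return\s+true\s*;", re.S),
    "ALLOW_ALL_HOSTNAME_VERIFIER": re.compile(r"ALLOW_ALL_HOSTNAME_VERIFIER"),
    "cleartext HTTP URL": re.compile(r"\"http://[^\"]+\""),
}

def scan_file(path: Path):
    """Return a list of (issue, line_number) findings for one Java source file."""
    text = path.read_text(errors="ignore")
    findings = []
    for issue, pattern in RISK_PATTERNS.items():
        for match in pattern.finditer(text):
            line = text.count("\n", 0, match.start()) + 1
            findings.append((issue, line))
    return findings

def scan_project(root: str):
    """Walk a source tree and report every suspicious match."""
    for java_file in Path(root).rglob("*.java"):
        for issue, line in scan_file(java_file):
            print(f"{java_file}:{line}: possible {issue}")

if __name__ == "__main__":
    scan_project(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A production tool would of course operate on parsed ASTs or bytecode rather than regular expressions, but even a lightweight scanner like this illustrates how an IDE plugin or market-side service could surface the misuse classes discussed above.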
|
Ray geometry in non-pinhole cameras: a survey <s> Introduction <s> A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We hav e created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Introduction <s> From the Publisher: ::: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Introduction <s> Optical systems used in photography and cinema produce depth-of-field effects, that is, variations of focus with depth. These effects are simulated in image synthesis by integrating incoming radiance at each pixel over the lense aperture. Unfortunately, aperture integration is extremely costly for defocused areas where the incoming radiance has high variance, since many samples are then required for a noise-free Monte Carlo integration. On the other hand, using many aperture samples is wasteful in focused areas where the integrand varies little. Similarly, image sampling in defocused areas should be adapted to the very smooth appearance variations due to blurring. This article introduces an analysis of focusing and depth-of-field in the frequency domain, allowing a practical characterization of a light field's frequency content both for image and aperture sampling. 
Based on this analysis we propose an adaptive depth-of-field rendering algorithm which optimizes sampling in two important ways. First, image sampling is based on conservative bandwidth prediction and a splatting reconstruction technique ensures correct image reconstruction. Second, at each pixel the variance in the radiance over the aperture is estimated and used to govern sampling. This technique is easily integrated in any sampling-based renderer, and vastly improves performance. <s> BIB003
|
A pinhole camera collects rays passing through a common 3D point, which is called the Center-of-Projection (CoP). Conceptually, it can be viewed as a light-proof box with a small hole in one side, through which light from a scene passes and projects an inverted image on the opposite side of the box, as shown in Fig. 1 ((a) a pinhole camera collects rays passing through a common 3D point, the CoP; (b) an illustration of the pinhole obscura). The history of pinhole cameras can be traced back to Mo Jing, a Mohist philosopher in the fifth century BC in China, who described a similar design using a closed room and a hole in the wall. In the 10th century, the Persian scientist Ibn al-Haytham (Alhazen) wrote about naturally occurring rudimentary pinhole cameras. In 1822, Niepce managed to take the first photograph using the pinhole camera obscura via lithography. Today, the pinhole camera serves as the most common workhorse for general imaging applications. The imaging quality of a pinhole camera relies heavily on choosing a properly sized pinhole: a small pinhole produces a sharp image but the image will be dimmer due to insufficient light, whereas a large pinhole generates brighter but blurrier images. To address this issue, lenses have been used for converging light. The goal is to replace the pure pinhole model with a pinhole-like optical model that can admit more light while maintaining image sharpness. For example, a thin, convex lens can be placed at the pinhole position with a focal length equal to the distance to the film plane in order to take pictures of distant objects. This emulates opening up the pinhole significantly. We refer to this thin lens-based pinhole approximation as pinhole optics. In computer vision and graphics, pinhole cameras are the dominant imaging model for two main reasons. First, pinhole geometry is rather simple. Each pinhole camera can be uniquely defined by only three parameters (the position of the CoP in 3D). The pinhole imaging process can be decomposed into two parts: projecting the scene geometry into rays and mapping the rays onto the image plane; both can be uniformly described by the classic 3 × 4 pinhole camera matrix BIB002 . Under homogeneous coordinates, the imaging process is linear. Second, in bright light, the human eye acts as a virtual pinhole camera, and the observed images exhibit all characteristics of a pinhole image, e.g., points map to points, lines map to lines, parallel lines converge at a vanishing point, etc. Pinhole cameras are therefore also referred to as perspective cameras in the graphics and vision literature. The pinhole imaging model, however, is rare in insect eyes. Compound eyes, which may consist of thousands of individual photoreceptor units or ommatidia, are much more common. The image perceived is a combination of inputs from the numerous ommatidia (individual "eye units"), which are located on a convex surface, thus pointing in slightly different directions. Compound eyes hence possess a very large view angle and greatly help detect fast movement. Notice that rays collected by a compound eye no longer follow pinhole geometry. Rather, they follow multi-viewpoint or multi-perspective imaging geometry. The idea of the non-pinhole imaging model has been widely adopted in art: artists, architects, and engineers regularly draw using non-pinhole projections. Despite their incongruity of views, effective non-pinhole images are still able to preserve spatial coherence.
Pre-Renaissance and postimpressionist artists frequently use non-pinhole models to depict more than can be seen from any specific viewpoint. For example, the cubism of Picasso and Matisse can depict, within a single context, details of a scene that are simultaneously inaccessible from a single view, yet easily interpretable by a viewer. The goal of this survey is to carry out a comprehensive review of non-pinhole imaging models and their applications in computer graphics and vision. Scope: On the theory front, this survey presents a unique approach to systematically study non-pinhole imaging models in the ray space. Specifically, we parameterize rays in a 4D ray space using the Two-Plane Parametrization (2PP) BIB001 BIB003 and then study geometric ray structures of non-pinhole cameras in the ray space. We show that common non-perspective phenomena such as reflections, refractions, and defocus blurs can all be viewed as ray geometry transformations. Further, commonly used non-pinhole cameras can be effectively modeled as special (planar) 2D manifolds in the ray space. The ray manifold model also provides feasible solutions for the forward projection problem, i.e., how to find the projection from a 3D point to its corresponding pixel in a non-pinhole imaging system. On the application side, we showcase a broad range of non-pinhole imaging systems. In computer vision, we discuss state-of-the-art solutions that apply non-pinhole cameras to stereo matching, multi-view reconstruction, shape-from-distortion, etc. In computational photography, we discuss emerging solutions that use non-pinhole camera models for designing catadioptric cameras and projectors to acquire/project with a much wider Field-of-View (FoV), as well as various light field camera designs to directly acquire the 4D ray space in a single image. In computer graphics, we demonstrate using non-pinhole camera models for generating panoramas, creating cubism styles, rendering caustics, faux-animations from still-life scenes, rendering beyond occlusions, etc. This survey is closely related to recent surveys on multi-perspective modeling and rendering and on computational photography. Yu et al. provide a general overview of multi-perspective cameras, whereas we provide a comprehensive ray-space mathematical model for a broader class of non-pinhole cameras. Raskar et al. focus mostly on computational photography, whereas we discuss the use of conceptual and real non-pinhole cameras for applications in computer vision and computer graphics. Further, our unified ray geometry analysis may fundamentally change people's view on cameras and projectors.
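As a concrete reference point for the 3 × 4 pinhole camera matrix and the linearity of pinhole imaging under homogeneous coordinates mentioned above, here is a minimal sketch (all numeric values are arbitrary illustrations):

```python
import numpy as np

# Intrinsics: focal lengths and principal point (arbitrary illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: identity rotation, translated so the CoP sits at (0, 0, -5).
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])     # t = -R @ CoP

P = K @ np.hstack([R, t])               # the classic 3x4 pinhole camera matrix

def project(P, X):
    """Project a 3D point X (length 3) to pixel coordinates with the pinhole matrix."""
    x = P @ np.append(X, 1.0)           # linear map in homogeneous coordinates
    return x[:2] / x[2]                 # perspective divide

print(project(P, np.array([0.2, -0.1, 3.0])))   # a point 8 units in front of the CoP
```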
|
Ray geometry in non-pinhole cameras: a survey <s> Ray space <s> A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We hav e created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Ray space <s> We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples. <s> BIB002
|
We use the Two-Plane Parametrization (2PP) that is widely used in light field BIB001 and lumigraph BIB002 for representing rays, as shown in Fig. 2(a) . Under 2PP, a ray in free space is defined by its intersections with two parallel planes (Π uv and Π st ). Usually, Π uv is chosen as the aperture plane (z = 0) whose origin is the origin of the coordinate system. Π st is placed at z = 1 and chosen to be the default image plane. All rays that are not parallel to Π uv and Π st will intersect the two planes at [u, v, 0] and [s, t, 1], respectively, and we use [u, v, s, t] for parameterizing each ray.
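The 2PP mapping is straightforward to compute. Below is a minimal sketch (the function names and the sample ray are illustrative assumptions) that converts a ray given by an origin and direction into its [u, v, s, t] coordinates by intersecting Π uv (z = 0) and Π st (z = 1), and recovers the 3D point on the ray at any depth:

```python
import numpy as np

def ray_to_2pp(origin, direction):
    """Map a ray (origin o, direction d, with d_z != 0) to its 2PP coordinates
    [u, v, s, t]: its intersections with the planes z = 0 and z = 1."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    if abs(d[2]) < 1e-12:
        raise ValueError("rays parallel to the parametrization planes have no 2PP coordinates")
    pu = o + (-o[2] / d[2]) * d              # intersection with Pi_uv (z = 0)
    ps = o + ((1.0 - o[2]) / d[2]) * d       # intersection with Pi_st (z = 1)
    return np.array([pu[0], pu[1], ps[0], ps[1]])

def point_on_ray(uvst, z):
    """Recover the 3D point where the ray [u, v, s, t] crosses depth z."""
    u, v, s, t = uvst
    return np.array([u + z * (s - u), v + z * (t - v), z])

# A ray from (1, 2, -3) toward the origin hits z = 0 at (0, 0).
r = ray_to_2pp([1.0, 2.0, -3.0], [-1.0, -2.0, 3.0])
print(r, point_on_ray(r, -3.0))   # [0, 0, -1/3, -2/3] and the original origin (1, 2, -3)
```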
|
Ray geometry in non-pinhole cameras: a survey <s> The thin lens operator <s> This paper contributes to the theory of photograph formation from light fields. The main result is a theorem that, in the Fourier domain, a photograph formed by a full lens aperture is a 2D slice in the 4D light field. Photographs focused at different depths correspond to slices at different trajectories in the 4D space. The paper demonstrates the utility of this theorem in two different ways. First, the theorem is used to analyze the performance of digital refocusing, where one computes photographs focused at different depths from a single light field. The analysis shows in closed form that the sharpness of refocused photographs increases linearly with directional resolution. Second, the theorem yields a Fourier-domain algorithm for digital refocusing, where we extract the appropriate 2D slice of the light field's Fourier transform, and perform an inverse 2D Fourier transform. This method is faster than previous approaches. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> The thin lens operator <s> Human stereo vision works by fusing a pair of perspective images with a purely horizontal parallax. Recent developments suggest that very few varieties of multiperspective stereo pairs exist. In this paper, we introduce a new stereo model, which we call epsilon stereo pairs, for fusing a broader class of multiperspective images. An epsilon stereo pair consists of two images with a slight vertical parallax. We show many multiperspective camera pairs that do not satisfy the stereo constraint can still form epsilon stereo pairs. We then introduce a new ray-space warping algorithm to minimize stereo inconsistencies in an epsilon pair using multiperspective collineations. This makes epsilon stereo model a promising tool for synthesizing close-to-stereo fusions from many non-stereo pairs. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> The thin lens operator <s> We present a new shape-from-distortion framework for recovering specular (reflective/refractive) surfaces. While most existing approaches rely on accurate correspondences between 2D pixels and 3D points, we focus on analyzing the curved images of 3D lines which we call curved line images or CLIs. Our approach models CLIs of local reflections or refractions using the recently proposed general linear cameras (GLCs). We first characterize all possible CLIs in a GLC. We show that a 3D line will appear as a conic in any GLC. For a fixed GLC, the conic type is invariant to the position and orientation of the line and is determined by the GLC parameters. Furthermore, CLIs under single reflection/refraction can only be lines or hyperbolas. Based on our new theory, we develop efficient algorithms to use multiple CLIs to recover the GLC camera parameters. We then apply the curvature-GLC theory to derive the Gaussian and mean curvatures from the GLC intrinsics. This leads to a complete distortion-based reconstruction framework. Unlike conventional correspondence-based approaches that are sensitive to image distortions, our approach benefits from the CLI distortions. Finally, we demonstrate applying our framework for recovering curvature fields on both synthetic and real specular surfaces. <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> The thin lens operator <s> We present a novel theory for characterizing defocus blurs in multi-perspective cameras such as catadioptric mirrors. 
Our approach studies how multi-perspective ray geometry transforms under the thin lens. We first use the General Linear Cameras (GLCs) [21] to approximate the incident multi-perspective rays to the lens and then apply a Thin Lens Operator (TLO) to map an incident GLC to the exit GLC. To study defocus blurs caused by the GLC rays, we further introduce a new Ray Spread Function (RSF) model analogous the Point Spread Function (PSF). While PSF models defocus blurs caused by a 3D scene point, RSF models blurs spread by rays. We derive closed form RSFs for incident GLC rays, and we show that for catadioptric cameras with a circular aperture, the RSF can be effectively approximated as a single or mixtures of elliptic-shaped kernels. We apply our method for predicting defocus blurs on commonly used catadioptric cameras and for reducing de-focus blurs in catadioptric projections. Experiments on synthetic and real data demonstrate the accuracy and general applicability of our approach. <s> BIB004
|
Recall that practical pinhole cameras are constructed by using a thin lens in order to collect more light. Although real lenses are typically a complex assembly of multiple lenses, they can still be effectively modeled using the Thin Lens Equation 1/a + 1/b = 1/f, where a is the object distance, b is the image distance, and f is the thin lens focal length. The thin lens can be viewed as a workhorse that maps each incident ray r = [u, v, s, t] approaching the lens to the exit ray r′ = [u′, v′, s′, t′] towards the sensor. Ng BIB001 and Ding et al. BIB004 separately derived the Thin Lens Operator (TLO) to show how rays are transformed after passing through a thin lens. By choosing the aperture plane as Π uv at z = 0 and the image sensor plane as Π st at z = 1, we have u′ = u, v′ = v. Using the thin lens equation, it can be shown BIB002 that the thin lens operator L transforms the ray coordinates as (Eq. (9)) [u, v, s, t] → [u, v, s − u/f, t − v/f]. This reveals that the thin lens L behaves as a linear, or more precisely, a shear operator on rays, as shown in Fig. 2(b). For a toy case study, let us investigate how a thin lens transforms a set of incident rays that follow pinhole geometry. Assume the incident rays originate from the CoP Ċ = [C_x, C_y, C_z]. By applying the TLO (Eq. (9)) to the pinhole constraints (Eq. (2)), we obtain a new pair of constraints, Eq. (10), for the exiting rays [u′, v′, s′, t′]. If Ċ does not lie on the focal plane Π L− of the lens on the world side (i.e., C_z ≠ −f), then Eq. (10) can be rewritten as a new pair of pinhole constraints (Eq. (11)); therefore, the exiting rays follow pinhole geometry with the new CoP at Ċ′ = f/(f + C_z) · [C_x, C_y, C_z]. If Ċ lies on Π L− (C_z = −f), then Eq. (10) degenerates to the orthographic constraints BIB003 . In this case, all exiting rays correspond to an orthographic camera with direction [−C_x/f, −C_y/f, 1]. The results derived above are well known, as they can be directly viewed as the image of a 3D point through the thin lens. Nevertheless, for more complex cases when the incident rays do not follow pinhole geometry, the TLO analysis is crucial for modeling the exit ray geometry BIB004 . This case study also reveals that all rays emitted from a 3D scene point Ċ will generally converge at a different 3D point Ċ′ behind the thin lens. The cone of rays passing through Ċ will therefore spread onto a disk of pixels on the sensor. This process is commonly described using the Point Spread Function (PSF), i.e., the mapping from a 3D point to a disk of pixels. As shown in Fig. 3 , assuming that the sensor moves Δz away from the conjugate depth z = C′_z and the lens has a circular aperture with diameter D, the PSF is a disk of diameter D·Δz/C′_z.
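As a sanity check of the shear form of the TLO reconstructed above (function names and numeric values are illustrative assumptions), the sketch below traces a small bundle of rays from a scene point through the lens and verifies that all exit rays converge at the conjugate point predicted by the thin lens equation:

```python
import numpy as np

def thin_lens_operator(uvst, f):
    """Shear form of the TLO: [u, v, s, t] -> [u, v, s - u/f, t - v/f],
    with Pi_uv on the lens (z = 0) and Pi_st at z = 1."""
    u, v, s, t = uvst
    return np.array([u, v, s - u / f, t - v / f])

def rays_from_point(C, aperture_samples):
    """2PP coordinates of rays leaving scene point C and hitting the lens at (u, v, 0)."""
    rays = []
    for (u, v) in aperture_samples:
        # x(z) = u + z*(u - Cx)/(-Cz), evaluated at z = 1 (and likewise for y)
        s = u + (u - C[0]) / (-C[2])
        t = v + (v - C[1]) / (-C[2])
        rays.append(np.array([u, v, s, t]))
    return rays

f, C = 1.0, np.array([0.4, -0.2, -2.0])            # object 2f in front of the lens
samples = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)]
exit_rays = [thin_lens_operator(r, f) for r in rays_from_point(C, samples)]

# All exit rays should meet at the conjugate point C' = f/(f + Cz) * C = (-0.4, 0.2, 2.0).
Cp = f / (f + C[2]) * C
for u, v, s, t in exit_rays:
    p = np.array([u + Cp[2] * (s - u), v + Cp[2] * (t - v), Cp[2]])
    print(np.allclose(p, Cp))            # True for every sampled ray
```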
|
Ray geometry in non-pinhole cameras: a survey <s> Classical non-pinhole cameras <s> Modeling and analyzing pushbroom sensors commonly used in satellite imagery is difficult and computationally intensive due to the motion of an orbiting satellite with respect to the rotating Earth, and the nonlinearity of the mathematical model involving orbital dynamics. In this paper, a simplified model of a pushbroom sensor (the linear pushbroom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting pushbroom model. Besides remote sensing, the linear pushbroom model is also useful in many other imaging applications. Simple noniterative methods are given for solving the major standard photogrammetric problems for the linear pushbroom model: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; and scene reconstruction given image correspondences and ground-control points. The linear pushbroom model leads to theoretical insights that are approximately valid for the full model as well. The epipolar geometry of linear pushbroom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the fundamental matrix of perspective cameras is shown to exist for linear pushbroom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear pushbroom cameras. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Classical non-pinhole cameras <s> We introduce anew kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of a X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. <s> BIB002
|
Pushbroom cameras, consisting of a linear sensor, are routinely used in satellite imagery BIB001 . The pushbroom sensor is mounted on a moving rail, and as the platform moves, the view plane sweeps out a volume of space and forms a pushbroom image on the sensor. Rays collected by a pushbroom camera should satisfy two constraints: (1) the slit constraint, where the slit is the motion path of the pushbroom sensor; (2) all the sweeping rays are parallel to some plane that is perpendicular to the slit. Assume the common slit is parallel to Π uv and Π st and parameterize it by a point on the slit and the slit direction; the collected rays then satisfy two linear constraints, where the first is the parallel slit constraint and the second corresponds to the parallel sweeping planes. In practice, a pushbroom image can be synthesized by moving a perspective camera along a linear path and assembling the same column of each perspective image, as shown in Fig. 4 (a) and (b). Another popular class of non-pinhole cameras are the XSlit cameras. An XSlit camera has two oblique (neither parallel nor coplanar) slits in 3D space. The camera collects rays that simultaneously pass through the two slits and projects them onto an image plane. If we choose the parametrization planes parallel to both slits, rays in an XSlit camera will then satisfy two parallel slit constraints, i.e., two linear constraints. Similar to pushbroom images, XSlit images can also be synthesized using images captured by a moving pinhole camera. Zomet et al. BIB002 generated XSlit images by stitching linearly varying columns across a row of pinhole images, as shown in Fig. 4 (c) and (d).
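The column-stitching constructions described above are easy to prototype. The sketch below (array shapes, the synthetic frame sequence, and the linear column schedule are illustrative assumptions) assembles a pushbroom image from a fixed column and an XSlit image from a linearly varying column of a translating pinhole image sequence:

```python
import numpy as np

def pushbroom_mosaic(frames, column):
    """Assemble a pushbroom image by taking the same column from every frame of a
    laterally translating pinhole camera (frames: array of shape [n, height, width])."""
    return np.stack([frame[:, column] for frame in frames], axis=1)

def xslit_mosaic(frames, start_col, end_col):
    """Assemble an XSlit image by sampling a linearly varying column across the frames,
    in the spirit of the column-stitching construction of Zomet et al."""
    n = len(frames)
    cols = np.linspace(start_col, end_col, n).round().astype(int)
    return np.stack([frame[:, c] for frame, c in zip(frames, cols)], axis=1)

# Synthetic example: 60 frames of a 100 x 120 pinhole image sequence.
frames = np.random.rand(60, 100, 120)
print(pushbroom_mosaic(frames, 60).shape)     # (100, 60)
print(xslit_mosaic(frames, 20, 100).shape)    # (100, 60)
```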
|
Ray geometry in non-pinhole cameras: a survey <s> General linear cameras (GLC) <s> Modeling and analyzing pushbroom sensors commonly used in satellite imagery is difficult and computationally intensive due to the motion of an orbiting satellite with respect to the rotating Earth, and the nonlinearity of the mathematical model involving orbital dynamics. In this paper, a simplified model of a pushbroom sensor (the linear pushbroom model) is introduced. It has the advantage of computational simplicity while at the same time giving very accurate results compared with the full orbiting pushbroom model. Besides remote sensing, the linear pushbroom model is also useful in many other imaging applications. Simple noniterative methods are given for solving the major standard photogrammetric problems for the linear pushbroom model: computation of the model parameters from ground-control points; determination of relative model parameters from image correspondences between two images; and scene reconstruction given image correspondences and ground-control points. The linear pushbroom model leads to theoretical insights that are approximately valid for the full model as well. The epipolar geometry of linear pushbroom cameras is investigated and shown to be totally different from that of a perspective camera. Nevertheless, a matrix analogous to the fundamental matrix of perspective cameras is shown to exist for linear pushbroom sensors. From this it is shown that a scene is determined up to an affine transformation from two views with linear pushbroom cameras. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> General linear cameras (GLC) <s> Mosaics acquired by pushbroom cameras, stereo panoramas, omnivergent mosaics, and spherical mosaics can be viewed as images taken by non-central cameras, i.e. cameras that project along rays that do not all intersect at one point. It has been shown that in order to reduce the correspondence search in mosaics to a one-parametric search along curves, the rays of the non-central cameras have to lie in double ruled epipolar surfaces. In this work, we introduce the oblique stereo geometry, which has nonintersecting double ruled epipolar surfaces. We analyze the configurations of mutually oblique rays that see every point in space. We call such configurations oblique cameras. We argue that oblique cameras are important because they are the most non-central cameras among all cameras. We show that oblique cameras, and the corresponding oblique stereo geometry, exist and give an example of a physically realizable oblique stereo geometry. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> General linear cameras (GLC) <s> We introduce anew kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. 
Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of a X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> General linear cameras (GLC) <s> We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. Our GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D affine subspaces are a precise definition of a projected image of a 3D scene. The GLC model also provides an intuitive physical interpretation, which can be used to characterize real imaging systems. Finally, since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems. <s> BIB004 </s> Ray geometry in non-pinhole cameras: a survey <s> General linear cameras (GLC) <s> We present a new shape-from-distortion framework for recovering specular (reflective/refractive) surfaces. While most existing approaches rely on accurate correspondences between 2D pixels and 3D points, we focus on analyzing the curved images of 3D lines which we call curved line images or CLIs. Our approach models CLIs of local reflections or refractions using the recently proposed general linear cameras (GLCs). We first characterize all possible CLIs in a GLC. We show that a 3D line will appear as a conic in any GLC. For a fixed GLC, the conic type is invariant to the position and orientation of the line and is determined by the GLC parameters. Furthermore, CLIs under single reflection/refraction can only be lines or hyperbolas. Based on our new theory, we develop efficient algorithms to use multiple CLIs to recover the GLC camera parameters. We then apply the curvature-GLC theory to derive the Gaussian and mean curvatures from the GLC intrinsics. This leads to a complete distortion-based reconstruction framework. Unlike conventional correspondence-based approaches that are sensitive to image distortions, our approach benefits from the CLI distortions. Finally, we demonstrate applying our framework for recovering curvature fields on both synthetic and real specular surfaces. <s> BIB005
|
To study the ray geometry of a local ray tangent plane, Yu and McMillan BIB004 developed a new camera model called the General Linear Camera (GLC). GLCs are 2D planar ray manifolds that can describe the traditional pinhole, orthographic, pushbroom, and XSlit cameras. A GLC is defined as the affine combination of three generator rays r_1, r_2, and r_3: r = α·r_1 + β·r_2 + (1 − α − β)·r_3, with arbitrary α and β. For example, in the ray tangent plane analysis, the three ray generators are chosen as r, r + d_1 and r + d_2. To determine the type of non-pinhole camera for any GLC specification, they further derived a ray characteristic equation that computes how many singularities (lines or points) all rays in the GLC pass through: the three generator rays cross a common line at depth λ if and only if their intersections with the plane z = λ are collinear, i.e., the 3 × 3 determinant whose rows are [u_i + λ(s_i − u_i), v_i + λ(t_i − v_i), 1] vanishes (Eq. (17)). Equation (17) yields a quadratic equation of the form Aλ² + Bλ + C = 0, where the coefficients A, B, and C are obtained by expanding the determinant. An edge parallel condition is defined to check if all three pairs of corresponding edges of the u−v and s−t triangles formed by the generator rays are parallel, i.e., (s_i − s_j, t_i − t_j) ∥ (u_i − u_j, v_i − v_j) for all i ≠ j. Given three generator rays, the GLC type can be determined by the coefficient A, the discriminant Δ = B² − 4AC of the characteristic equation, and the edge parallel condition, as shown in Table 1 . Yu and McMillan BIB004 have shown that there are precisely eight types of GLC, as shown in Fig. 5 : in a pinhole camera, all rays pass through a single point; in an orthographic camera, all rays are parallel; in a pushbroom camera BIB001 , all rays lie on a set of parallel planes and pass through a line; in an XSlit camera BIB003 , all rays pass through two non-coplanar lines; in a pencil camera, all coplanar rays originate from a point on a line and lie on a specific plane through the line; in a twisted orthographic camera, all rays lie on parallel twisted planes and no rays intersect; in a bilinear camera BIB002 , no two rays are coplanar and no two rays intersect; and in an EPI camera, all rays lie on a 2D plane. To find the projection of a 3D point in a GLC, one can combine the GLC constraints with the pinhole constraints. For example, consider an XSlit camera that obeys the two parallel slit constraints (Eq. (5)) derived in Sect. 2.1.3. Rays passing through the 3D point obey another two pinhole linear constraints (Eq. (2)). We can therefore uniquely determine the ray in the XSlit camera that passes through the 3D point. To calculate the projection of a 3D line in the XSlit camera, one can compute the projection of each point on the line. Ding et al. BIB005 show that line projections can only be lines or conics, as shown in Fig. 6 . The complete classification of conics that can be observed by each type of GLC is enumerated in Table 2 .
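The characteristic equation can be evaluated without expanding the determinant symbolically. The sketch below (a hedged illustration, not the original GLC implementation) fits the quadratic numerically from three samples of the collinearity determinant and reports the number of singularities:

```python
import numpy as np

def char_poly(r1, r2, r3):
    """Coefficients (A, B, C) of the GLC characteristic equation A*l^2 + B*l + C = 0.
    The equation expresses that the three generator rays cross a common line at depth
    lambda, i.e. their intersections with the plane z = lambda are collinear."""
    def det_at(lam):
        rows = [[u + lam * (s - u), v + lam * (t - v), 1.0]
                for (u, v, s, t) in (r1, r2, r3)]
        return np.linalg.det(np.array(rows))
    # The determinant is quadratic in lambda; recover A, B, C from three samples.
    d0, d1, d2 = det_at(0.0), det_at(1.0), det_at(2.0)
    A = (d2 - 2.0 * d1 + d0) / 2.0
    B = d1 - d0 - A
    return A, B, d0

def count_singularities(r1, r2, r3, eps=1e-9):
    """Number of distinct real roots, i.e. of lines/points all GLC rays pass through."""
    A, B, C = char_poly(r1, r2, r3)
    if abs(A) < eps:                      # degenerate (at most linear) case
        return 1 if abs(B) > eps else 0
    disc = B * B - 4.0 * A * C
    return 2 if disc > eps else (1 if disc > -eps else 0)

# Three rays through the pinhole at (0, 0, 2): exactly one singularity is reported.
pinhole_rays = [np.array([u, v, u / 2.0, v / 2.0]) for (u, v) in [(1, 0), (0, 1), (1, 1)]]
print(count_singularities(*pinhole_rays))
```

Classifying the result into the eight GLC types additionally requires the edge parallel test and the sign pattern from Table 1, which is omitted here.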
|
Ray geometry in non-pinhole cameras: a survey <s> Case study 2: 3D surfaces <s> This paper proposes a unified and consistent set of flexible tools to approximate important geometric attributes, including normal vectors and curvatures on arbitrary triangle meshes. We present a consistent derivation of these first and second order differential properties using averaging Voronoi cells and the mixed Finite-Element/Finite-Volume method, and compare them to existing formulations. Building upon previous work in discrete geometry, these operators are closely related to the continuous case, guaranteeing an appropriate extension from the continuous to the discrete setting: they respect most intrinsic properties of the continuous differential operators. We show that these estimates are optimal in accuracy under mild smoothness conditions, and demonstrate their numerical quality. We also present applications of these operators, such as mesh smoothing, enhancement, and quality checking, and show results of denoising in higher dimensions, such as for tensor images. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Case study 2: 3D surfaces <s> We address the problem of curvature estimation from sampled smooth surfaces. Building upon the theory of normal cycles, we derive a definition of the curvature tensor for polyhedral surfaces. This definition consists in a very simple and new formula. When applied to a polyhedral approximation of a smooth surface, it yields an efficient and reliable curvature estimation algorithm. Moreover, we bound the difference between the estimated curvature and the one of the smooth surface in the case of restricted Delaunay triangulations. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Case study 2: 3D surfaces <s> The differential geometry of smooth three-dimensional surfaces can be interpreted from one of two perspectives: in terms of oriented frames located on the surface, or in terms of a pair of associated focal surfaces. These focal surfaces are swept by the loci of the principal curvatures' radii. In this article, we develop a focal-surfacebased differential geometry interpretation for discrete mesh surfaces. Focal surfaces have many useful properties. For instance, the normal of each focal surface indicates a principal direction of the corresponding point on the original surface. We provide algorithms to robustly approximate the focal surfaces of a triangle mesh with known or estimated normals. Our approach locally parameterizes the surface normals about a point by their intersections with a pair of parallel planes. We show neighboring normal triplets are constrained to pass simultaneously through two slits, which are parallel to the specified parametrization planes and rule the focal surfaces. We develop both CPU and GPU-based algorithms to efficiently approximate these two slits and, hence, the focal meshes. Our focal mesh estimation also provides a novel discrete shape operator that simultaneously estimates the principal curvatures and principal directions. <s> BIB003
|
It is also possible to convert a 3D surface to a 2D ray manifold. Yu et al. BIB003 proposed a normal-ray model that represents a surface by locally parameterizing it about its normal rays, based on a focal surface approximation, as shown in Fig. 7(a)-(d). Given a smooth surface S(x, y), at each vertex Ṗ we orient the local frame to align z = 0 with the tangent plane at Ṗ. We further assume Ṗ is the origin of the z = 0 plane and set Π uv , Π st at z = 0 and z = 1, respectively. Under this parametrization, normal rays can be mapped as n = [u, v, s, t]. The tangent plane can then be represented by a GLC with three rays: n, n + n_x and n + n_y. Using the GLC analysis, one can compute the two slits for each normal-ray GLC from the characteristic equation. Yu et al. BIB003 have shown that the two slits are perpendicular to each other and rule the focal surfaces. Swept by the loci of the principal curvatures' radii, the focal surfaces encapsulate many useful geometric properties of the corresponding actual surface. For example, normals of the actual surface are tangent to both focal surfaces, and the normal of each focal surface indicates a principal direction of the corresponding point on the original surface. In fact, each slit is tangential to its corresponding focal surface. Since the two focal surfaces are perpendicular to each other, one slit is parallel to the normal of the focal surface that the other slit corresponds to. Therefore, the two slits give us the principal directions of the original surface. Besides, the depths of the slits/focal surfaces, computed as the roots of the characteristic equation, give the principal curvature radii (and hence the principal curvatures) of the surface. In Fig. 7 (e) and (f), we show two results of estimated mean curvature and minimum principal curvature using the normal-ray model, compared to the Voronoi-edge algorithm BIB002 BIB001 .
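To illustrate the normal-ray construction, the sketch below (an illustrative example with assumed helper names, not the authors' implementation) builds the normal-ray GLC of a small sphere patch, whose tangent plane at the origin is already z = 0, and recovers both slit depths as the sphere radius, i.e., the two principal curvature radii:

```python
import numpy as np

def line_to_2pp(p, d):
    """2PP coordinates [u, v, s, t] of the line through p with direction d (d_z != 0)."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    pu = p - (p[2] / d[2]) * d               # intersection with z = 0
    ps = p + ((1.0 - p[2]) / d[2]) * d       # intersection with z = 1
    return np.array([pu[0], pu[1], ps[0], ps[1]])

def slit_depths(n1, n2, n3):
    """Roots of the characteristic equation of the GLC spanned by three normal rays,
    obtained by fitting the quadratic collinearity determinant at lambda = 0, 1, 2."""
    def det_at(lam):
        rows = [[u + lam * (s - u), v + lam * (t - v), 1.0]
                for (u, v, s, t) in (n1, n2, n3)]
        return np.linalg.det(np.array(rows))
    d0, d1, d2 = det_at(0.0), det_at(1.0), det_at(2.0)
    A = (d2 - 2.0 * d1 + d0) / 2.0
    B = d1 - d0 - A
    return np.roots([A, B, d0])

# Sphere of radius R = 2 touching the tangent plane z = 0 at the origin: every normal
# passes through the center (0, 0, R), so both slit depths should come out as R.
R, center = 2.0, np.array([0.0, 0.0, 2.0])
pts = [np.array([x, y, R - np.sqrt(R**2 - x**2 - y**2)])
       for (x, y) in [(0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]]
normals = [line_to_2pp(p, center - p) for p in pts]
print(slit_depths(*normals))      # approximately [2, 2]
```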
|
Ray geometry in non-pinhole cameras: a survey <s> Case study 3: defocus analysis in catadioptric cameras <s> We introduce the occlusion camera: a non-pinhole camera with 3D distorted rays. Some of the rays sample surfaces that are occluded in the reference view, while the rest sample visible surfaces. The extra samples alleviate disocclusion errors. The silhouette curves are pushed back, so nearly visible samples become visible. A single occlusion camera covers the entire silhouette of an object, whereas many depth images are required to achieve the same effect. Like regular depth images, occlusion-camera images have a single layer thus the number of samples they contain is bounded by the image resolution, and connectivity is defined implicitly. We construct and use occlusion-camera images in hardware. An occlusion-camera image does not guarantee that all disocclusion errors are avoided. Objects with complex geometry are rendered using the union of the samples stored by a planar pinhole camera and an occlusion camera depth image. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Case study 3: defocus analysis in catadioptric cameras <s> We present a novel theory for characterizing defocus blurs in multi-perspective cameras such as catadioptric mirrors. Our approach studies how multi-perspective ray geometry transforms under the thin lens. We first use the General Linear Cameras (GLCs) [21] to approximate the incident multi-perspective rays to the lens and then apply a Thin Lens Operator (TLO) to map an incident GLC to the exit GLC. To study defocus blurs caused by the GLC rays, we further introduce a new Ray Spread Function (RSF) model analogous the Point Spread Function (PSF). While PSF models defocus blurs caused by a 3D scene point, RSF models blurs spread by rays. We derive closed form RSFs for incident GLC rays, and we show that for catadioptric cameras with a circular aperture, the RSF can be effectively approximated as a single or mixtures of elliptic-shaped kernels. We apply our method for predicting defocus blurs on commonly used catadioptric cameras and for reducing de-focus blurs in catadioptric projections. Experiments on synthetic and real data demonstrate the accuracy and general applicability of our approach. <s> BIB002
|
Based on the GLC-TLO analysis, Ding et al. BIB002 demonstrated how the theory can be used for characterizing and compensating catadioptric defocusing. They use the Ray Spread Function (RSF) to describe how a general set of incident rays spreads to pixels on the sensor; the classical PSF is a special case of the RSF when the incident rays are from a pinhole camera. Assume a scene point Ṗ and a curved mirror surface z(x, y). The RSF of Ṗ is formed by rays emitted from Ṗ that are reflected off the mirror, then transmitted through the lens, and finally received by the sensor, as shown in Fig. 9. Using the reflection analysis in Sect. 3.2.2, one can decompose each local reflection patch as an XSlit camera, so it is particularly useful to analyze the RSF of an XSlit GLC. According to the GLC-TLO transformation, the exit GLC is also an XSlit with two slits l1 and l2 lying on z = λ1 and z = λ2, respectively. To simplify the analysis, consider the special case where the two slits are orthogonal to each other; one can further rotate the coordinate system so that the slit directions are aligned with the u and v axes. The two slit constraints can then be rewritten in terms of the ray coordinates (Eq. (27)). Substituting Eq. (27) into the aperture constraint G(u, v) yields Eq. (28), which indicates that the RSF of a GLC is of elliptical shape, with major and minor radii that follow in closed form from the slit depths λ1 and λ2, the aperture size, and the focus setting. The analysis reveals that the RSF caused by a 3D point in a catadioptric mirror can only be an ellipse, a circle, or a line segment. Furthermore, the shape of the RSF depends on the location of the scene point, the size of the aperture, and the camera's focus setting.

Fig. 9 The formation of the RSF in a catadioptric imaging system: light from a scene point is reflected off the mirror, truncated by the thin-lens aperture, and finally received by the sensor, forming the RSF.
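A minimal sketch of a thin-lens operator acting on rays in the two-plane parameterization is given below (assumptions: the lens lies on the Π_uv plane at z = 0, Π_st is at z = 1, rays propagate toward +z, paraxial optics; this is a stand-in, not the exact notation of BIB002). Applying it to the three generator rays of an incident GLC yields the exit GLC, whose slits can then be found via the characteristic equation.

```python
# Minimal sketch of a paraxial thin-lens operator on two-plane ray coordinates.
# The lens keeps the point where a ray crosses it and bends the ray's slope by
# -1/f times that offset.
import numpy as np

def thin_lens_operator(ray, f):
    """ray = [u, v, s, t] -> exit ray after a thin lens of focal length f."""
    u, v, s, t = ray
    return np.array([u, v, s - u / f, t - v / f])  # slope' = slope - offset / f

def transform_glc(rays, f):
    """Map the three generator rays of an incident GLC to the exit GLC."""
    return np.array([thin_lens_operator(r, f) for r in rays])

# Example: an incident pinhole bundle converging at (0, 0, 4) exits as a bundle
# converging at (0, 0, 4/3) for f = 2; for a general XSlit the exit rays form
# another XSlit whose slit depths follow from the characteristic equation.
incident = np.array([[0.0, 0.0, 0.0, 0.0],
                     [0.4, 0.0, 0.3, 0.0],
                     [0.0, 0.4, 0.0, 0.3]])
print(transform_glc(incident, f=2.0))
```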
|
Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> We describe a new approach for simulating apparent camera motion through a 3D environment. The approach is motivated by a traditional technique used in 2D cel animation, in which a single background image, which we call a multiperspective panorama, is used to incorporate multiple views of a 3D environment as seen from along a given camera path. When viewed through a small moving window, the panorama produces the illusion of 3D motion. In this paper, we explore how such panoramas can be designed by computer, and we examine their application to cel animation in particular. Multiperspective panoramas should also be useful for any application in which predefined camera moves are applied to 3D scenes, including virtual reality fly-throughs, computer games, and architectural walk-throughs. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation. Additional <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> Image mosaicing is commonly used to increase the visual field of view by pasting together many images or video frames. Existing mosaicing methods are based on projecting all images onto a predetermined single manifold: A plane is commonly used for a camera translating sideways, a cylinder is used for a panning camera, and a sphere is used for a camera which is both panning and tilting. While different mosaicing methods should therefore be used for different types of camera motion, more general types of camera motion, such as forward motion, are practically impossible for traditional mosaicing. A new methodology to allow image mosaicing in more general cases of camera motion is presented. Mosaicing is performed by projecting thin strips from the images onto manifolds which are adapted to the camera motion. While the limitations of existing mosaicing techniques are a result of using predetermined manifolds, the use of more general manifolds overcomes these limitations. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> Conventional vision systems and algorithms assume the camera to have a single viewpoint. However, sensors need not always maintain a single viewpoint. For instance, an incorrectly aligned system could cause non-single viewpoints. Also, systems could be designed to specifically deviate from a single viewpoint to trade-off image characteristics such as resolution and field of view. In these cases, the locus of viewpoints forms what is called a caustic. In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. Properties of caustics with respect to field of view and resolution are presented. Finally, we present ways to calibrate conic catadioptric systems and estimate their caustics from known camera motion. <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> The automatic construction of large, high-resolution image mosaics is an active area of research in the fields of photogrammetry, computer vision, image processing, and computer graphics. Image mosaics can be used for many different applications [163, 1.22]. The most traditional application is the construction of large aerial and satellite photographs from collections of images [186]. 
More recent applications include scene stabilization and change detection [93], video compression [125, 122, 167] and video indexing [240], increasing the field of view [105, 177, 266] and resolution [126, 50] of a camera, and even simple photo editing [38]. A particularly popular application is the emulation of traditional film-based panoramic photography [175] with digital panoramic mosaics, for applications such as the construction of virtual environments [181, 267] and virtual travel [49]. <s> BIB004 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> We introduce anew kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of a X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. <s> BIB005 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> A theory of stereo image formation is presented that enables a complete classification of all possible stereo views, including non-perspective varieties. Towards this end, the notion of epipolar geometry is generalized to apply to multiperspective images. It is shown that any stereo pair must consist of rays lying on one of three varieties of quadric surfaces. A unified representation is developed to model all classes of stereo views, based on the concept of a quadric view. The benefits include a unified treatment of projection and triangulation operations for all stereo views. The framework is applied to derive new types of stereo image representations with unusual and useful properties. Experimental examples of these images are constructed and used to obtain 3D binocular object reconstructions. <s> BIB006 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> We present a novel method for analyzing reflections on arbitrary surfaces. We model reflections using a broader than usual class of imaging models, which include both perspective and multiperspective camera types. We provide an analytical framework to locally model reflections as specific multiperspective cameras around every ray based on a new theory of general linear cameras. Our framework better characterizes the complicated image distortions seen on irregular mirror surfaces as well as the conventional catadioptric mirrors. We show the connection between multiperspective camera models and caustic surfaces of reflections and demonstrate how they reveal important surface rulings of the caustics. Finally, we show how to use our analysis to assist mirror design and characterize distortions seen in catadioptric imaging systems. 
<s> BIB007 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> We present a system for producing multi-viewpoint panoramas of long, roughly planar scenes, such as the facades of buildings along a city street, from a relatively sparse set of photographs captured with a handheld still camera that is moved along the scene. Our work is a significant departure from previous methods for creating multi-viewpoint panoramas, which composite thin vertical strips from a video sequence captured by a translating video camera, in that the resulting panoramas are composed of relatively large regions of ordinary perspective. In our system, the only user input required beyond capturing the photographs themselves is to identify the dominant plane of the photographed scene; our system then computes a panorama automatically using Markov Random Field optimization. Users may exert additional control over the appearance of the result by drawing rough strokes that indicate various high-level goals. We demonstrate the results of our system on several scenes, including urban streets, a river bank, and a grocery store aisle. <s> BIB008 </s> Ray geometry in non-pinhole cameras: a survey <s> Synthesizing panoramas <s> A conventional pinhole camera captures only a small fraction of a 3-D scene due to occlusions. We introduce the graph camera, a non-pinhole with rays that circumvent occluders to create a single layer image that shows simultaneously several regions of interest in a 3-D scene. The graph camera image exhibits good continuity and little redundancy. The graph camera model is literally a graph of tens of planar pinhole cameras. A fast projection operation allows rendering in feed-forward fashion, at interactive rates, which provides support for dynamic scenes. The graph camera is an infrastructure level tool with many applications. We explore the graph camera benefits in the contexts of virtual 3-D scene exploration and summarization, and in the context of real-world 3-D scene visualization. The graph camera allows integrating multiple video feeds seamlessly, which enables monitoring complex real-world spaces with a single image. <s> BIB009
|
A non-pinhole camera can combine patches from multiple pinhole cameras into a single image to overcome the FoV limits. In image-based rendering, pushbroom and XSlit panoramas can be synthesized by translating a pinhole camera along the image plane and then stitching specific columns from each perspective image: a pushbroom panorama assembles the same column from every image BIB006, whereas an XSlit panorama linearly varies the column index BIB005. The synthesized panoramas can exhibit image distortions such as apparent stretching and shrinking, and even duplicated projections of a single point BIB003 BIB007. To alleviate the distortions, Agarwala et al. BIB008 constructed panoramas using arbitrarily shaped regions of the source images taken by a pinhole camera moving along a straight path, instead of selecting simple strips. The region shape in each perspective image is carefully chosen by using Markov Random Field (MRF) optimization based on various properties that are desired for the panorama. Instead of translating the camera planarly, Shum and Szeliski BIB004 created panoramas on a cylindrical manifold by panning a pinhole camera around its optical center; they project the perspective images onto a common cylinder to compose the final panorama. Peleg et al. BIB002 proposed a mosaicing method for more general camera motion: they first determine the projection manifolds according to the camera motion and then warp the source images onto the manifolds to stitch the panorama. Non-pinhole camera models are also widely used for creating computer-generated panoramas. The 1940 Disney animation Pinocchio opens with a virtual camera flying over a small village; instead of traditional panning, the camera rotates at the same time, creating an astonishing 3D effect via 2D painting. In fact, the shot was made by drawing a panoramic view with "warped perspective", as shown in Fig. 11, and then showing only a small clip at a time. Wood et al. BIB001 proposed to create similar cel animation effects from 3D models. They combined elements of multiple pinhole strips into a single image using a semi-automatic image registration process; their method relies on optimization techniques as well as optical flow and blending transitions between views. Popescu et al. BIB009 proposed the graph camera for generating a single panoramic image that simultaneously captures/renders regions of interest of a 3D scene from different perspectives. Conceptually, the graph camera is a combination of different pinhole cameras that sample the scene. A non-perspective panorama can then be generated by elaborately stitching the boundaries of multiple pinhole images; viewing continuity with minimum redundancy is achieved through a sequence of pinhole frustum bending, splitting and merging. The panoramic rendering can then be used in 3D scene exploration, summarization and visualization.

Fig. 12 A multi-perspective image rendered using the GLC framework, and (c) extracted images from a faux-animation: the source images were acquired by rotating a ceramic figure on a turntable, and multi-perspective renderings were used to turn the head and hind quarters of the figure in a fake image-based animation.
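The strip-stitching construction described above can be summarized in a few lines; the sketch below is illustrative (not code from the cited systems), and the function name and synthetic frames are assumptions.

```python
# Illustrative strip-mosaicing sketch: a pushbroom panorama copies the same
# column from every frame, while an XSlit panorama varies the sampled column
# linearly with the frame index.  `frames` is assumed to be a list of HxWx3
# images captured at equally spaced positions along a straight path.
import numpy as np

def strip_panorama(frames, start_col, end_col):
    """start_col == end_col yields a pushbroom; otherwise an XSlit panorama."""
    n = len(frames)
    cols = np.linspace(start_col, end_col, n).round().astype(int)
    strips = [frames[i][:, cols[i]:cols[i] + 1, :] for i in range(n)]
    return np.concatenate(strips, axis=1)  # one column per camera position

frames = [np.random.rand(480, 640, 3) for _ in range(200)]  # stand-in footage
pushbroom = strip_panorama(frames, 320, 320)  # fixed column -> pushbroom
xslit = strip_panorama(frames, 100, 540)      # linearly varying column -> XSlit
```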
|
Ray geometry in non-pinhole cameras: a survey <s> Non-photorealistic rendering <s> We introduce anew kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of a X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Non-photorealistic rendering <s> We introduce the occlusion camera: a non-pinhole camera with 3D distorted rays. Some of the rays sample surfaces that are occluded in the reference view, while the rest sample visible surfaces. The extra samples alleviate disocclusion errors. The silhouette curves are pushed back, so nearly visible samples become visible. A single occlusion camera covers the entire silhouette of an object, whereas many depth images are required to achieve the same effect. Like regular depth images, occlusion-camera images have a single layer thus the number of samples they contain is bounded by the image resolution, and connectivity is defined implicitly. We construct and use occlusion-camera images in hardware. An occlusion-camera image does not guarantee that all disocclusion errors are avoided. Objects with complex geometry are rendered using the union of the samples stored by a planar pinhole camera and an occlusion camera depth image. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Non-photorealistic rendering <s> Images that seamlessly combine views at different levels of detail are appealing. However, creating such multiscale images is not a trivial task, and most such illustrations are handcrafted by skilled artists. This paper presents a framework for direct multiscale rendering of geometric and volumetric models. The basis of our approach is a set of non-linearly bent camera rays that smoothly cast through multiple scales. We show that by properly setting up a sequence of conventional pinhole cameras to capture features of interest at different scales, along with image masks specifying the regions of interest for each scale on the projection plane, our rendering framework can generate non-linear sampling rays that smoothly project objects in a scene at multiple levels of detail onto a single image. We address two important issues with non-linear camera projection. First, our streamline-based ray generation algorithm avoids undesired camera ray intersections, which often result in unexpected images. Second, in order to maintain camera ray coherence and preserve aesthetic quality, we create an interpolated 3D field that defines the contribution of each pinhole camera for determining ray orientations. 
The resulting multiscale camera has three main applications: (1) presenting hierarchical structure in a compact and continuous manner, (2) achieving focus+context visualization, and (3) creating fascinating and artistic images. <s> BIB003
|
Renderings from multiple viewpoints can be combined in ways other than panoramas. By making subtle changes in viewing direction across the imaging plane, it is possible to depict more of a scene than could be seen from a single point of view. Such images differ from panoramas in that they are intended to be viewed as a whole. Neo-cubism is an example: many of Picasso's works are non-perspective images of this kind. Figure 12(a) and (b) compare one of Picasso's paintings with an image synthesized using the GLC framework; starting from a simple layout, it achieves similar multi-perspective effects. It is also possible to use multi-perspective rendering to create fake or faux-animations from still-life scenes. This is particularly useful for animating image-based models. Figure 12(c) shows three frames from a synthesized animation, each of which corresponds to a multi-perspective image rendered from a 3D light field. Zomet et al. BIB001 used a similar approach, employing a single XSlit camera to achieve rotation effects. Mei et al. BIB002 defined an occlusion camera that can sample visible surfaces as well as surfaces occluded in the reference view, to allow re-rendering new views with correct occlusions. Their occlusion camera bends the rays towards a central axis (the pole) to sample the hidden surfaces in the reference view. A 3D radial distortion centered at the pole allows the occlusion camera to see around occluders along the pole. Such distortion pulls out hidden samples according to their depth: the larger the depth, the more the sample is displaced. Therefore, samples that lie on the same ray in a conventional perspective camera are separated to different locations in the distorted occlusion camera image according to their depth. In this way, hidden samples that are close to the silhouette become visible in the occlusion camera reference image. Hsu et al. BIB003 recently proposed a multi-scale rendering framework that can render objects smoothly at multiple levels of detail in a single image. They set up a sequence of pinhole cameras to render objects at different scales of interest and use a user-specified mask to determine the regions to be displayed in each view. The final multi-scale image is rendered by reprojecting the images of the multi-scale cameras to the one with the largest scale, using Bézier-curve-based non-linear ray casting to ensure coherent transitions between scales. Their technique achieves focus-plus-context visualization and is useful in scientific visualization and artistic rendering.
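The depth-dependent radial distortion behind the occlusion camera can be sketched as follows; the linear displacement law and the constant k here are assumptions for illustration, not Mei et al.'s exact formulation.

```python
# Illustrative sketch of a depth-dependent radial distortion about a pole.
import numpy as np

def occlusion_distort(xy, depth, pole, k=0.05):
    """xy: Nx2 reference-image points; depth: N depths; pole: 2D pole position.
    Each sample is pushed radially away from the pole by an amount growing with
    depth, so samples that share a reference ray separate in the distorted image."""
    xy = np.asarray(xy, dtype=float)
    d = xy - np.asarray(pole, dtype=float)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    r = np.where(r == 0.0, 1.0, r)          # avoid division by zero at the pole
    return xy + (d / r) * (k * np.asarray(depth, dtype=float)[:, None])

# Two samples on the same reference ray but at different depths land apart:
print(occlusion_distort([[120, 80], [120, 80]], [2.0, 10.0], pole=(64, 64)))
```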
|
Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye, and another panorama is generated for the right eye is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described to generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real-time on a PC. It should be noted that the stereo images are created without computation of 3D structure, and the depth effects are created only in the viewer's brain. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> We analyze the geometry of the two-slit camera and come to two conclusions. First, we show that the definition given by et al. [9] makes sense only if the two slits are not intersecting. Secondly, we prove that the complete image from a two-slit camera cannot be obtained as an intersection of the rays of the two-slit camera with a plane in space. Motivated by the quest for a unified representation of various cameras by simple geometrical objects, we give a new definition of linear oblique cameras as those which comprise all real lines incident with some non-real line and show that it is equivalent with the definition we gave earlier. We also show that no single line neither in the real projective space nor in its coplexification, can be used to define analogously a two-slit camera. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> The Crossed-Slits (X-Slits) camera is defined by two nonintersecting slits, which replace the pinhole in the common perspective camera. Each point in space is projected to the image plane by a ray which passes through the point and the two slits. The X-Slits projection model includes the pushbroom camera as a special case. In addition, it describes a certain class of panoramic images, which are generated from sequences obtained by translating pinhole cameras. In this paper we develop the epipolar geometry of the X-Slits projection model. We show an object which is similar to the fundamental matrix; our matrix, however, describes a quadratic relation between corresponding image points (using the Veronese mapping). Similarly the equivalent of epipolar lines are conics in the image plane. Unlike the pin-hole case, epipolar surfaces do not usually exist in the sense that matching epipolar lines lie on a single surface; we analyze the cases when epipolar surfaces exist, and characterize their properties. Finally, we demonstrate the matching of points in pairs of X-Slits panoramic images. <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> A theory of stereo image formation is presented that enables a complete classification of all possible stereo views, including non-perspective varieties. Towards this end, the notion of epipolar geometry is generalized to apply to multiperspective images. 
It is shown that any stereo pair must consist of rays lying on one of three varieties of quadric surfaces. A unified representation is developed to model all classes of stereo views, based on the concept of a quadric view. The benefits include a unified treatment of projection and triangulation operations for all stereo views. The framework is applied to derive new types of stereo image representations with unusual and useful properties. Experimental examples of these images are constructed and used to obtain 3D binocular object reconstructions. <s> BIB004 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> Human stereo vision works by fusing a pair of perspective images with a purely horizontal parallax. Recent developments suggest that very few varieties of multiperspective stereo pairs exist. In this paper, we introduce a new stereo model, which we call epsilon stereo pairs, for fusing a broader class of multiperspective images. An epsilon stereo pair consists of two images with a slight vertical parallax. We show many multiperspective camera pairs that do not satisfy the stereo constraint can still form epsilon stereo pairs. We then introduce a new ray-space warping algorithm to minimize stereo inconsistencies in an epsilon pair using multiperspective collineations. This makes epsilon stereo model a promising tool for synthesizing close-to-stereo fusions from many non-stereo pairs. <s> BIB005 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> We consider the problem of capturing shape characteristics on specular (refractive and reflective) surfaces that are nearly flat. These surfaces are difficult to model using traditional methods based on reconstructing the surface positions and normals. These lower-order shape attributes provide little information to identify important surface characteristics related to distortions. In this paper, we present a framework for recovering the higher-order geometry attributes of specular surfaces. Our method models local reflections and refractions in terms of a special class of multiperspective cameras called the general linear cameras (GLCs). We then develop a new theory that correlates the higher-order differential geometry attributes with the local GLCs. Specifically, we show that Gaussian and mean curvature can be directly derived from the camera intrinsics of the local GLCs. We validate this theory on both synthetic and real-world specular surfaces. Our method places a known pattern in front of a reflective surface or beneath a refractive surface and captures a distorted image on the surface. We then compute the optimal GLC using a sparse set of correspondences and recover the curvatures from the GLC. Experiments demonstrate that our methods are robust and highly accurate. <s> BIB006 </s> Ray geometry in non-pinhole cameras: a survey <s> Stereo and 3D reconstruction <s> We present a new shape-from-distortion framework for recovering specular (reflective/refractive) surfaces. While most existing approaches rely on accurate correspondences between 2D pixels and 3D points, we focus on analyzing the curved images of 3D lines which we call curved line images or CLIs. Our approach models CLIs of local reflections or refractions using the recently proposed general linear cameras (GLCs). We first characterize all possible CLIs in a GLC. We show that a 3D line will appear as a conic in any GLC. 
For a fixed GLC, the conic type is invariant to the position and orientation of the line and is determined by the GLC parameters. Furthermore, CLIs under single reflection/refraction can only be lines or hyperbolas. Based on our new theory, we develop efficient algorithms to use multiple CLIs to recover the GLC camera parameters. We then apply the curvature-GLC theory to derive the Gaussian and mean curvatures from the GLC intrinsics. This leads to a complete distortion-based reconstruction framework. Unlike conventional correspondence-based approaches that are sensitive to image distortions, our approach benefits from the CLI distortions. Finally, we demonstrate applying our framework for recovering curvature fields on both synthetic and real specular surfaces. <s> BIB007
|
Traditional stereo matching algorithms for pinhole cameras have also been extended to non-pinhole geometry. Seitz BIB004 and Pajdla BIB002 independently studied all possible non-pinhole camera pairs that can have epipolar geometry. Their work suggests that only three varieties of epipolar geometry exist: planes, hyperboloids, and hyperbolic paraboloids, all corresponding to doubly ruled surfaces. Peleg et al. BIB001 stitched the same column of images from a rotating pinhole camera to form a circular pushbroom; they then fused two oblique circular pushbrooms to synthesize a stereo panorama. Feldman et al. BIB003 proved that a pair of XSlit cameras can have valid epipolar geometry if they share a slit or if the slits intersect in four pairwise distinct points. However, Seitz's and Pajdla's results also reveal that very few varieties of multi-perspective stereo pairs exist. Ding and Yu BIB005 introduced a new near-stereo model, which they call epsilon stereo pairs. An epsilon stereo pair consists of two non-pinhole images with a slight vertical parallax. They have shown that many non-pinhole camera pairs that do not satisfy the stereo constraint can still form epsilon stereo pairs. They then introduced a new ray-space warping algorithm to minimize stereo inconsistencies in an epsilon pair using non-pinhole collineations (homographies), which makes the epsilon stereo model a promising tool for synthesizing close-to-stereo fusions from many non-stereo pairs, as shown in Fig. 13. Most recently, Kim et al. presented a method for generating near-stereoscopic views by cutting the light field: they compute the stereoscopy as the optimal cut through the light field subject to a depth budget, a maximum disparity gradient, and a desired stereoscopic baseline. A special class of non-pinhole cameras are reflective and refractive surfaces; one can then view the surface reconstruction problem as a camera calibration problem. Ding et al. BIB006 BIB007 proposed a shape-from-distortion framework for recovering specular (reflective/refractive) surfaces by analyzing the local reflection GLCs and curved line images. In BIB006, they focused on recovering a special type of surface: near-flat surfaces such as windows and relatively flat water surfaces. Such surfaces are difficult to model because lower-order surface attributes provide little information. They divide the specular surface into piecewise triangles and estimate the local reflection GLC of each triangle to recover higher-order surface properties such as curvature. In BIB007, the authors further show how to analyze the curving of lines to recover the GLC parameters and then the surface attributes.

Fig. 13 Epsilon stereo matching on two XSlit cameras. From top to bottom: (a) one of the two XSlit images; (b) the ground-truth depth map; (c) the disparity map recovered by treating the two images as a stereo pair and applying the graph-cut algorithm; (d) the horizontal disparity map recovered by the epsilon stereo matching algorithm.

Fig. 14 (a) A typical catadioptric image with a wide FoV. (b) Forward projection: given a scene point P, the mirror surface, and the camera, find its projection in the viewing camera after reflection. It is crucial to find the reflection point Q on the mirror surface.
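The XSlit stereo condition quoted above from Feldman et al. BIB003 can be checked geometrically; the sketch below is an illustrative interpretation of that statement (the tolerances, helper names, and slit representation as 3D point-direction pairs are assumptions).

```python
# Illustrative check: two XSlit cameras admit a valid epipolar geometry if they
# share a slit, or if their four slits intersect in four pairwise distinct points.
import numpy as np
from itertools import combinations

def line_intersection(l1, l2, eps=1e-9):
    """Return the intersection point of two 3D lines (point, direction), or None."""
    (p1, d1), (p2, d2) = l1, l2
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < eps:                  # parallel lines
        return None
    if abs(np.dot(p2 - p1, n)) > eps:            # skew (not coplanar)
        return None
    A = np.stack([d1, -d2], axis=1)              # solve p1 + t*d1 = p2 + s*d2
    t, s = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return p1 + t * d1

def same_line(l1, l2, eps=1e-9):
    (p1, d1), (p2, d2) = l1, l2
    parallel = np.linalg.norm(np.cross(d1, d2)) < eps
    return parallel and np.linalg.norm(np.cross(np.subtract(p2, p1), d1)) < eps

def valid_xslit_stereo(camA, camB):
    """camA, camB: each a pair of slits [(point, direction), (point, direction)]."""
    if any(same_line(a, b) for a in camA for b in camB):
        return True                               # the two cameras share a slit
    pts = [line_intersection(a, b) for a in camA for b in camB]
    if any(p is None for p in pts):
        return False
    return all(np.linalg.norm(p - q) > 1e-9 for p, q in combinations(pts, 2))
```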
|
Ray geometry in non-pinhole cameras: a survey <s> Centric catadioptric cameras <s> Conventional video cameras have limited fields of view that make them restrictive in a variety of vision applications. There are several ways to enhance the field of view of an imaging system. However, the entire imaging system must have a single effective viewpoint to enable the generation of pure perspective images from a sensed image. A new camera with a hemispherical field of view is presented. Two such cameras can be placed back-to-back, without violating the single viewpoint constraint, to arrive at a truly omnidirectional sensor. Results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification. The paper concludes with a discussion on the spatial resolution of the proposed camera. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Centric catadioptric cameras <s> Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the cameras used to construct it. Moreover, we include detailed analysis of the defocus blur caused by the use of a curved mirror in a catadioptric sensor. <s> BIB002
|
The simplest catadioptric cameras are designed to maintain a single viewpoint, i.e., all the projection rays intersect at one common point (the effective viewpoint), in order to generate perspectively correct images from sections of the acquired image. Such systems are commonly referred to as centric catadioptric cameras. Since all projection rays from scene points form a single pinhole camera about the effective viewpoint before reflection, the forward projection problem can be resolved simply by projecting the 3D point into this virtual pinhole camera. Nayar and Baker BIB002 BIB001 analyzed all possible classes of centric catadioptric systems. They derived a fixed viewpoint constraint, which requires that every projection ray passing through the effective pinhole of the camera (after reflection) would have passed through the effective viewpoint before being reflected by the mirror surface. Since the mirror is rotationally symmetric, one can consider this problem in 2D by taking a slice through the central axis. Assuming that the effective viewpoint is at the origin [0, 0], the effective pinhole is at [0, c], and the mirror surface is of the form z(r) = z(x, y), where r = √(x² + y²), the constraint can then be written as a quadratic first-order ordinary differential equation:

r(c − 2z)(dz/dr)² − 2(r² + cz − z²)(dz/dr) + r(2z − c) = 0    (29)

The solution to Eq. (29) reveals that only 3D mirrors swept by conic sections around the central axis can satisfy the fixed viewpoint constraint and therefore maintain a single viewpoint. They further showed two practical setups of centric catadioptric cameras: (1) positioning a pinhole camera at the focal point of a hyperboloidal mirror; and (2) orienting an orthographic camera (realized by using a tele-lens) towards the rotational axis of a paraboloidal mirror. Both designs, however, require highly accurate alignment and precise assembly of the optical components.
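A numerical sanity check of Eq. (29) is sketched below (illustrative, not from BIB002): a conic mirror with its two foci placed at the effective viewpoint (0, 0) and the effective pinhole (0, c) satisfies the constraint, consistent with conic-section mirrors being the single-viewpoint solutions; the specific parameter values are assumptions.

```python
# Check that an ellipse with foci at the viewpoint (0, 0) and the pinhole (0, c)
# satisfies the fixed viewpoint constraint of Eq. (29) up to numerical error.
import numpy as np

c = 2.0                      # distance between viewpoint and pinhole
a = 2.0                      # semi-major axis of the mirror conic (along z)
b = np.sqrt(a**2 - (c / 2)**2)

def mirror_z(r):             # upper branch of the ellipse, centered at z = c/2
    return c / 2 + a * np.sqrt(1.0 - (r / b)**2)

def constraint_residual(r, h=1e-6):
    z = mirror_z(r)
    zp = (mirror_z(r + h) - mirror_z(r - h)) / (2 * h)   # numerical dz/dr
    return r * (c - 2 * z) * zp**2 - 2 * (r**2 + c * z - z**2) * zp + r * (2 * z - c)

for r in [0.3, 0.8, 1.2, 1.6]:
    print(f"r = {r:.1f}, residual = {constraint_residual(r):+.2e}")   # ~0 everywhere
```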
|
Ray geometry in non-pinhole cameras: a survey <s> Non-centric catadioptric cameras <s> Conventional vision systems and algorithms assume the camera to have a single viewpoint. However, sensors need not always maintain a single viewpoint. For instance, an incorrectly aligned system could cause non-single viewpoints. Also, systems could be designed to specifically deviate from a single viewpoint to trade-off image characteristics such as resolution and field of view. In these cases, the locus of viewpoints forms what is called a caustic. In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. Properties of caustics with respect to field of view and resolution are presented. Finally, we present ways to calibrate conic catadioptric systems and estimate their caustics from known camera motion. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Non-centric catadioptric cameras <s> Abstract.Conventional vision systems and algorithms assume the imaging system to have a single viewpoint. However, these imaging systems need not always maintain a single viewpoint. For instance, an incorrectly aligned catadioptric system could cause non-single viewpoints. Moreover, a lot of flexibility in imaging system design can be achieved by relaxing the need for imaging systems to have a single viewpoint. Thus, imaging systems with non-single viewpoints can be designed for specific imaging tasks, or image characteristics such as field of view and resolution. The viewpoint locus of such imaging systems is called a caustic.In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. We use a simple parametric model for both, the reflector and the imaging system, to derive an analytic solution for the caustic surface. This model completely describes the imaging system and provides a map from pixels in the image to their corresponding viewpoints and viewing direction. We use the model to analyze the imaging system's properties such as field of view, resolution and other geometric properties of the caustic itself. In addition, we present a simple technique to calibrate the class of conic catadioptric cameras and estimate their caustics from known camera motion. The analysis and results we present in this paper are general and can be applied to any catadioptric imaging system whose reflector has a parametric form. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Non-centric catadioptric cameras <s> We present a novel method for analyzing reflections on arbitrary surfaces. We model reflections using a broader than usual class of imaging models, which include both perspective and multiperspective camera types. We provide an analytical framework to locally model reflections as specific multiperspective cameras around every ray based on a new theory of general linear cameras. Our framework better characterizes the complicated image distortions seen on irregular mirror surfaces as well as the conventional catadioptric mirrors. We show the connection between multiperspective camera models and caustic surfaces of reflections and demonstrate how they reveal important surface rulings of the caustics. Finally, we show how to use our analysis to assist mirror design and characterize distortions seen in catadioptric imaging systems. <s> BIB003
|
Relaxing the single viewpoint constraint allows for more general, non-centric catadioptric cameras. In a non-centric catadioptric camera, the loci of virtual viewpoints form the caustic surfaces of the mirror; the centric catadioptric camera is a special case whose caustic degenerates to a point. Swaminathan et al. BIB001 BIB002 proposed to compute the caustic surface as the envelope of the reflection rays. Yu and McMillan BIB003 instead decompose the mirror surface into piecewise triangular patches and model each reflection patch as a GLC, as shown in Sect. 3.2.2. Recall that the local reflection ray geometry observed by a pinhole or an orthographic camera can only be one of four types of GLC: XSlit, pushbroom, pinhole, or orthographic. All of them can be viewed as special cases of the XSlit camera: when the two slits intersect, the XSlit degenerates into a pinhole camera; when one of the slits goes to infinity, it degenerates into a pushbroom; and when both slits go to infinity, it becomes an orthographic camera.
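The classification above can be mirrored in a small illustrative helper (the function name and the infinity encoding are assumptions). For the perpendicular-slit case discussed here, where the two slits lie on the parallel planes z = λ1 and z = λ2, the slits intersect exactly when the two depths coincide.

```python
# Illustrative classification of a local reflection GLC from its two slit depths.
# float('inf') encodes a slit at infinity.
import math

def classify_local_glc(depth1, depth2):
    at_inf1, at_inf2 = math.isinf(depth1), math.isinf(depth2)
    if at_inf1 and at_inf2:
        return "orthographic"            # both slits at infinity
    if at_inf1 or at_inf2:
        return "pushbroom"               # one slit at infinity
    if math.isclose(depth1, depth2):
        return "pinhole"                 # perpendicular slits at equal depth intersect
    return "XSlit"

print([classify_local_glc(a, b) for a, b in
       [(2.0, 5.0), (3.0, 3.0), (4.0, float("inf")), (float("inf"), float("inf"))]])
```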
|
Ray geometry in non-pinhole cameras: a survey <s> Axial cameras <s> A theory of stereo image formation is presented that enables a complete classification of all possible stereo views, including non-perspective varieties. Towards this end, the notion of epipolar geometry is generalized to apply to multiperspective images. It is shown that any stereo pair must consist of rays lying on one of three varieties of quadric surfaces. A unified representation is developed to model all classes of stereo views, based on the concept of a quadric view. The benefits include a unified treatment of projection and triangulation operations for all stereo views. The framework is applied to derive new types of stereo image representations with unusual and useful properties. Experimental examples of these images are constructed and used to obtain 3D binocular object reconstructions. <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Axial cameras <s> Although most works in computer vision use perspective or other central cameras, the interest in non-central camera models has increased lately, especially with respect to omnidirectional vision. Calibration and structure-from-motion algorithms exist for both, central and non-central cameras. An intermediate class of cameras, although encountered rather frequently, has received less attention. So-called axial cameras are non-central but their projection rays are constrained by the existence of a line that cuts all of them. This is the case for stereo systems, many non-central catadioptric cameras and pushbroom cameras for example. In this paper, we study the geometry of axial cameras and propose a calibration approach for them. We also describe the various axial catadioptric configurations which are more common and less restrictive than central catadioptric ones. Finally we used simulations and real experiments to prove the validity of our theory. <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Axial cameras <s> In this paper, we present a class of imaging systems, called radial imaging systems, that capture a scene from a large number of view-points within a single image, using a camera and a curved mirror. These systems can recover scene properties such as geometry, reflectance, and texture. We derive analytic expressions that describe the properties of a complete family of radial imaging systems, including their loci of viewpoints, fields of view, and resolution characteristics. We have built radial imaging systems that, from a single image, recover the frontal 3D structure of an object, generate the complete texture map of a convex object, and estimate the parameters of an analytic BRDF model for an isotropic material. In addition, one of our systems can recover the complete geometry of a convex object by capturing only two images. These results show that radial imaging systems are simple, effective, and convenient devices for a wide range of applications in computer graphics and computer vision. <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> Axial cameras <s> Catadioptric imaging systems are commonly used for wide-angle imaging, but lead to multi-perspective images which do not allow algorithms designed for perspective cameras to be used. Efficient use of such systems requires accurate geometric ray modeling as well as fast algorithms. 
We present accurate geometric modeling of the multi-perspective photo captured with a spherical catadioptric imaging system using axial-cone cameras: multiple perspective cameras lying on an axis each with a different viewpoint and a different cone of rays. This modeling avoids geometric approximations and allows several algorithms developed for perspective cameras to be applied to multi-perspective catadioptric cameras. ::: We demonstrate axial-cone modeling in the context of rendering wide-angle light fields, captured using a spherical mirror array. We present several applications such as spherical distortion correction, digital refocusing for artistic depth of field effects in wide-angle scenes, and wide-angle dense depth estimation. Our GPU implementation using axial-cone modeling achieves up to three orders of magnitude speed up over ray tracing for these applications. <s> BIB004
|
The forward projection problem can also be addressed using special catadioptric cameras such as the axial camera. The axial camera is an intermediate class of cameras that lies between centric and non-centric ones: all the projection rays are constrained to pass through a common axis but not through a common 3D point. One such model is a rotationally symmetric mirror with a pinhole camera viewing from its rotation axis, as shown in Fig. 16(a). Axial cameras are easier to construct than centric catadioptric ones. For example, in a centric hyperbolic catadioptric camera, the optical center of the viewing camera has to be placed precisely at the mirror's focus, whereas in an axial camera the optical center can be placed anywhere on the mirror axis. The fact that all reflection rays pass through the rotation axis implies that the local GLC decomposition maps all reflection patches to a group of XSlit cameras that share a common slit, namely the rotation axis. Ramalingam et al. BIB002 proposed a generic calibration algorithm for axial cameras that computes the projection rays for each pixel under the constraint imposed by the mirror axis. Agrawal et al. further provided an analytical solution to forward projection for axial cameras. Given the viewpoint and a mirror, they compute the light path from a scene point to the viewing camera by solving a closed-form, high-order forward projection equation; conceptually, this can be done by exhaustively computing the projection for each centric ring of the virtual camera. For a spherical mirror, they showed that the projection equation reduces to 4th degree. This closed-form solution can be used to effectively compute the epipolar geometry to accelerate catadioptric stereo matching and to compose multiple axial camera images into a perspective one BIB004. Another special class of axial cameras is the radial camera proposed by Kuthirummal and Nayar BIB003. Their goal is to strategically capture the scene from multiple viewpoints within a single image. A radial camera consists of a conventional camera looking through a hollow, rotationally symmetric mirror polished on the inside, as shown in Fig. 16(b). The FoV of the camera is folded inwards, and consequently the scene is captured both directly and from virtual viewpoints after reflection by the mirror, as shown in Fig. 16(c). Because a single camera is used, the radiometric properties are the same across all views, and no synchronization or calibration is required. The radial imaging system can also be viewed as a special axial camera with a circular locus of virtual viewpoints. As with the regular axial camera, a closed-form solution can be derived for the forward projection. Further, this camera has the same epipolar geometry as the cyclographs BIB001 and can therefore be effectively used for omni-directional 3D reconstruction, acquiring 3D textures, and sampling and estimating surface reflectance properties such as the Bidirectional Reflectance Distribution Function (BRDF).
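To make the forward projection concrete, the brute-force sketch below (an illustrative stand-in for the closed-form quartic mentioned above, with made-up geometry) finds the reflection point on a spherical mirror by a one-dimensional search, exploiting the rotational symmetry that confines the light path to the plane spanned by the mirror axis and the scene point.

```python
# Illustrative numeric forward projection for a spherical-mirror axial camera,
# solved in the plane containing the mirror axis and the scene point.
import numpy as np

R = 1.0                          # mirror (sphere) radius, centered at the origin
C = np.array([0.0, 3.0])         # camera pinhole on the axis (the y-axis here)
P = np.array([4.0, 1.0])         # scene point, already rotated into this plane

def residual(theta):
    M = R * np.array([np.sin(theta), np.cos(theta)])   # candidate mirror point
    n = M / R                                          # outward sphere normal at M
    to_p = (P - M) / np.linalg.norm(P - M)
    to_c = (C - M) / np.linalg.norm(C - M)
    bisector = to_p + to_c                             # reflection: n bisects the two
    return bisector[0] * n[1] - bisector[1] * n[0]     # 2D cross product, 0 at the root

# Scan for a sign change, then refine by bisection.
thetas = np.linspace(1e-3, np.pi / 2, 2000)
vals = np.array([residual(t) for t in thetas])
i = np.argmax(np.sign(vals[:-1]) != np.sign(vals[1:]))
lo, hi = thetas[i], thetas[i + 1]
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(lo) * residual(mid) > 0 else (lo, mid)
theta = 0.5 * (lo + hi)
print("reflection point:", R * np.array([np.sin(theta), np.cos(theta)]))
```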
|
Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> What are the elements of early vision? This question might be taken to mean, What are the fundamental atoms of vision?—and might be variously answered in terms ofsuch candidate structures as edges, peaks, corners, and so on. In this chapter we adopt a rather different point of view and ask the question, What are the fundamentalsubstances of vision? This distinction is important becausewe wish to focus on the first steps in extraction of visualinformation. At this level it is premature to talk aboutdiscrete objects, even such simple ones as edges and corners.There is general agreement that early vision involvesmeasurements of a number of basic image properties in-cluding orientation, color, motion, and so on. Figure l.lshows a caricature (in the style of Neisser, 1976), of the sort of architecture that has become quite popular as a model for both human and machine vision. The first stageof processing involves a set of parallel pathways, eachdevoted to one particular-visual property. We propose that the measurements of these basic properties be con-sidered as the elements of early vision. We think of earlyvision as measuring the amounts of various kinds of vi-sual "substances" present in the image (e.g., redness orrightward motion energy). In other words, we are inter- ested in how early vision measures “stuff” rather than in how it labels “things.”What, then, are these elementary visual substances?Various lists have been compiled using a mixture of intui-tion and experiment. Electrophysiologists have describedneurons in striate cortex that are selectively sensitive tocertain visual properties; for reviews, see Hubel (1988) and DeValois and DeValois (1988). Psychophysicists haveinferred the existence of channels that are tuned for cer- tain visual properties; for reviews, see Graham (1989), Olzak and Thomas (1986), Pokorny and Smith (1986), and Watson (1986). Researchers in perception have foundaspects of visual stimuli that are processed pre-attentive- ly (Beck, 1966; Bergen & Julesz, 1983; Julesz & Bergen, <s> BIB001 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> Ordinary cameras gather light across the area of their lens aperture, and the light striking a given subregion of the aperture is structured somewhat differently than the light striking an adjacent subregion. By analyzing this optical structure, one can infer the depths of the objects in the scene, i.e. one can achieve single lens stereo. The authors describe a camera for performing this analysis. It incorporates a single main lens along with a lenticular array placed at the sensor plane. The resulting plenoptic camera provides information about how the scene would look when viewed from a continuum of possible viewpoints bounded by the main lens aperture. Deriving depth information is simpler than in a binocular stereo system because the correspondence problem is minimized. The camera extracts information about both horizontal and vertical parallax, which improves the reliability of the depth estimates. > <s> BIB002 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. 
In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We hav e created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis <s> BIB003 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> This research further develops the light field and lumigraph image-based rendering methods and extends their utility. We present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low-cost, passive autostereoscopic viewing. Using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of-field within a light field. The dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. We explore the frequency domain and ray-space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware. <s> BIB004 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with an high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. 
We also show comparisons to real objects under the same illumination. <s> BIB005 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> This paper contributes to the theory of photograph formation from light fields. The main result is a theorem that, in the Fourier domain, a photograph formed by a full lens aperture is a 2D slice in the 4D light field. Photographs focused at different depths correspond to slices at different trajectories in the 4D space. The paper demonstrates the utility of this theorem in two different ways. First, the theorem is used to analyze the performance of digital refocusing, where one computes photographs focused at different depths from a single light field. The analysis shows in closed form that the sharpness of refocused photographs increases linearly with directional resolution. Second, the theorem yields a Fourier-domain algorithm for digital refocusing, where we extract the appropriate 2D slice of the light field's Fourier transform, and perform an inverse 2D Fourier transform. This method is faster than previous approaches. <s> BIB006 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single center of projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures. <s> BIB007 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> We describe a theoretical framework for reversibly modulating 4D light fields using an attenuating mask in the optical path of a lens based camera. Based on this framework, we present a novel design to reconstruct the 4D light field from a 2D camera image without any additional refractive elements as required by previous light field cameras. The patterned mask attenuates light rays inside the camera instead of bending them, and the attenuation recoverably encodes the rays on the 2D sensor. Our mask-equipped camera focuses just as a traditional camera to capture conventional 2D photos at full sensor resolution, but the raw pixel values also hold a modulated 4D light field. The light field can be recovered by rearranging the tiles of the 2D Fourier transform of sensor values into 4D planes, and computing the inverse Fourier transform. In addition, one can also recover the full resolution image information for the in-focus parts of the scene. 
We also show how a broadband mask placed at the lens enables us to compute refocused images at full sensor resolution for layered Lambertian scenes. This partial encoding of 4D ray-space data enables editing of image contents by depth, yet does not require computational recovery of the complete 4D light field. <s> BIB008 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> We present a catadioptric projector analogous to a catadioptric camera by combining a commodity digital projector with additional optical units. We show that, by using specially shaped reflectors/refractors, catadioptric projectors can offer an unprecedented level of flexibility in aspect ratio, size, and field of view. We also present efficient algorithms to reduce projection artifacts in catadioptric projectors, such as distortions, scattering, and defocusing. Instead of recovering the reflector/refractor geometry, our approach directly models the light transport between the projector and the viewpoint using the light transport matrix (LTM). We show how to efficiently approximate the pseudo inverse of the LTM and apply it to find the optimal input image that produces least projection distortions. Furthermore, we present a projection defocus analysis for reflector and thin refractor based catadioptric projectors. We show that defocus blur can be interpreted as spatially-varying Gaussian blurs on the input image. We then measure the kernels directly from the LTM and apply deconvolution to optimize the input image. We demonstrate the practical uses of catadioptric projectors in panoramic and omni-directional projections. Our new system achieves much wider field-of-view projection while maintaining sharpness and low geometric and photometric distortions. <s> BIB009 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views <s> BIB010 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> Stereo matching and volumetric reconstruction are the most explored 3D scene recovery techniques in computer vision. Many existing approaches assume perspective input images and use the epipolar constraint to reduce the search space and improve the accuracy. In this paper we present a novel framework that uses multi-perspective cameras for stereo matching and volumetric reconstruction.
Our approach first decomposes a multi-perspective camera into piecewise primitive General Linear Cameras or GLCs [32]. A pair of GLCs in general do not satisfy the epipolar constraint. However, they still form a nearly stereo pair. We develop a new Graph-Cut-based algorithm to account for the slight vertical parallax using the GLC ray geometry. We show that the recovered pseudo disparity map conveys important depth cues analogous to perspective stereo matching. To more accurately reconstruct a 3D scene, we develop a new multi-perspective volumetric reconstruction method. We discretize the scene into voxels and apply the GLC back-projections to map the voxel onto each input multi-perspective camera. Finally, we apply the graph-cut algorithm to optimize the 3D embedded voxel graph. We demonstrate our algorithms on both synthetic and real multi-perspective cameras. Experimental results show that our methods are robust and reliable. <s> BIB011 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> We show a new camera based interaction solution where an ordinary camera can detect small optical tags from a relatively large distance. Current optical tags, such as barcodes, must be read within a short range and the codes occupy valuable physical space on products. We present a new low-cost optical design so that the tags can be shrunk to 3mm visible diameter, and unmodified ordinary cameras several meters away can be set up to decode the identity plus the relative distance and angle. The design exploits the bokeh effect of ordinary camera lenses, which maps rays exiting from an out of focus scene point into a disk-like blur on the camera sensor. This bokeh-code or Bokode is a barcode design with a simple lenslet over the pattern. We show that a code with 15μm features can be read using an off-the-shelf camera from distances of up to 2 meters. We use intelligent binary coding to estimate the relative distance and angle to the camera, and show potential for applications in augmented reality and motion capture. We analyze the constraints and performance of the optical system, and discuss several plausible application scenarios. <s> BIB012 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> Catadioptric imaging systems are commonly used for wide-angle imaging, but lead to multi-perspective images which do not allow algorithms designed for perspective cameras to be used. Efficient use of such systems requires accurate geometric ray modeling as well as fast algorithms. We present accurate geometric modeling of the multi-perspective photo captured with a spherical catadioptric imaging system using axial-cone cameras: multiple perspective cameras lying on an axis each with a different viewpoint and a different cone of rays. This modeling avoids geometric approximations and allows several algorithms developed for perspective cameras to be applied to multi-perspective catadioptric cameras. We demonstrate axial-cone modeling in the context of rendering wide-angle light fields, captured using a spherical mirror array. We present several applications such as spherical distortion correction, digital refocusing for artistic depth of field effects in wide-angle scenes, and wide-angle dense depth estimation. Our GPU implementation using axial-cone modeling achieves up to three orders of magnitude speed up over ray tracing for these applications.
<s> BIB013 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> We introduce a new approach to capturing refraction in transparent media, which we call Light Field Background Oriented Schlieren Photography (LFBOS). By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Rather than using complicated and expensive optical setups as in traditional Schlieren photography we employ commodity hardware; our prototype consists of a camera and a lenslet array. By carefully encoding the color and intensity variations of a 4D probe instead of a diffuse 2D background, we avoid expensive computational processing of the captured data, which is necessary for Background Oriented Schlieren imaging (BOS). We analyze the benefits and limitations of our approach and discuss application scenarios. <s> BIB014 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> Acquiring transparent, refractive objects is challenging as these kinds of objects can only be observed by analyzing the distortion of reference background patterns. We present a new, single image approach to reconstructing thin transparent surfaces, such as thin solids or surfaces of fluids. Our method is based on observing the distortion of light field background illumination. Light field probes have the potential to encode up to four dimensions in varying colors and intensities: spatial and angular variation on the probe surface; commonly employed reference patterns are only two-dimensional by coding either position or angle on the probe. We show that the additional information can be used to reconstruct refractive surface normals and a sparse set of control points from a single photograph. <s> BIB015 </s> Ray geometry in non-pinhole cameras: a survey <s> Catadioptric projectors <s> We present a novel and simple computational imaging solution to robustly and accurately recover 3D dynamic fluid surfaces. Traditional specular surface reconstruction schemes place special patterns (checkerboard or color patterns) beneath the fluid surface to establish point-pixel correspondences. However, point-pixel correspondences alone are insufficient to recover surface normal or height and they rely on additional constraints to resolve the ambiguity. In this paper, we exploit using Bokode — a computational optical device that emulates a pinhole projector — for capturing ray-ray correspondences which can then be used to directly recover the surface normals. We further develop a robust feature matching algorithm based on the Active-Appearance Model to robustly establishing ray-ray correspondences. Our solution results in an angularly sampled normal field and we derive a new angular-domain surface integration scheme to recover the surface from the normal fields. Specifically, we reformulate the problem as an over-constrained linear system under spherical coordinate and solve it using Singular Value Decomposition. Experiments results on real and synthetic surfaces demonstrate that our approach is robust and accurate, and is easier to implement than state-of-the-art multi-camera based approaches. <s> BIB016
|
Finally, one can replace the viewing pinhole camera with a projector. Ding et al. BIB009 proposed the catadioptric projector by combining a commodity digital projector with specially shaped reflectors to achieve an unprecedented level of flexibility in aspect ratio, size, and FoV, as shown in Fig. 17. Their system assumes unknown reflector geometry and does not require accurate alignment between the projector and the optical units. They then use the inverse light transport technique to correct geometric distortions and scattering. The main difference between the catadioptric camera and the catadioptric projector is that the camera uses a near-zero aperture whereas the projector requires a wide aperture to achieve bright projections. However, the wide aperture may cause severe defocus blurs. Due to the non-pinhole nature of the reflection rays, the defocus blurs are much more complicated, e.g., the blur kernels are spatially-varying and non-circular. Therefore, traditional image preconditioning algorithms are not directly applicable. The analysis in Sect. 4.3 shows that the catadioptric defocus blur can range from an ellipse to a line segment, depending on the aperture setting and the projector focal depth. To compensate for defocus blurs, Ding et al. BIB009 adopt a hardware solution: they change the shape of the aperture to reduce the average size of the defocus blur kernel. Conceptually, one can use a very small aperture to emulate pinhole-type projection. However, small apertures block a large amount of light and produce dark projections. Their solution is then to find an aperture shape that can effectively reduce the blurs without sacrificing the brightness of the projection. In their approach, they first estimate the blur kernel by projecting a dotted pattern onto the wall and fitting an ellipse to each captured dot. They then compute the average major and minor radii across all dots as a and b. Using the analysis in Sect. 4.3, they relate these measured radii to an aperture shape that reduces the average blur while preserving projection brightness.

The most general non-pinhole camera should be able to sample the complete 4D ray space and then reconfigure the rays at will. This requires generalized optics that treat each optical element as a 4D ray-bender that modifies the rays in a light field BIB002 BIB010 BIB006. The collected ray bundles can then be regrouped into separate measurements of the plenoptic function BIB001. The most straightforward scheme is to move a camera along a 2D path to sample the 4D ray space BIB004 BIB003. Although this method is simple and easy to implement, it is only suitable for acquiring static scenes. Wilburn et al. BIB007 instead built a camera array to capture the light field. Constructing such a light field camera array, however, is extremely time- and effort-consuming and requires a substantial amount of engineering. The latest developments are the light field cameras.

Lenslet-based light field camera
Recent advances in optics manufacturing have enabled the light field to be captured using a single camera in one shot. Ng BIB006 designed a handheld plenoptic camera that records the light field within a single shot by placing a lenslet array in front of the camera sensor to separate converging rays. Each microlens focuses at the main aperture plane. Since the main lens is several orders of magnitude larger than a lenslet, it can be treated as being at infinity with respect to the lenslets. The sensor is placed at the focal plane of the lenslet array for simplification. In Ng's design, the F-numbers of the main lens and each microlens are matched to avoid cross-talk among microlens images. By parameterizing the in-lens light field with a 2PP of Π_uv at the main aperture and Π_st at the lenslet array, the acquired ray space is uniformly sampled. This design has led to the commercial light field camera, Lytro [25], as shown in Fig. 18(a). Lumsdaine et al. BIB010 introduced a slightly different design by focusing the lenslet array on a virtual plane inside the camera. In this case each microlens image captures more spatial samples but fewer angular samples of the focused virtual plane. This design is capable of producing higher-resolution results when focusing near the sampled image plane. However, the lower angular resolution leads to more severe ringing artifacts in the out-of-focus regions, as shown in Fig. 18(b).

Mask-based light field camera
Instead of using a lenslet array to separate light arriving at the same pixel from different directions, Veeraraghavan et al. BIB008 used a non-refractive patterned attenuation mask to modulate the light field in the frequency domain. Placed in the light path between the lens and the sensor, the mask attenuates light from different directions accordingly, as shown in Fig. 18(c). Considering the process in the frequency domain, we can view it as heterodyning the incoming light field. The attenuation mask needs to be reversible to ensure that demodulation can be performed. To recover the light field, they first transform the captured 2D image into the Fourier domain and then rearrange the tiles of the 2D Fourier transform into 4D space. Finally, the light field of the scene is computed by taking the inverse 4D Fourier transform. Further, the mask can be inserted at different locations along the optical path of the camera to achieve dynamic frequency modulation. However, the mask partially blocks the incoming light and greatly reduces light efficiency.

Mirror-based light field camera
It is also possible to acquire the light field using a catadioptric mirror array, as shown in Fig. 18(d). Unger et al. BIB005 combined a high-resolution tele-lens camera and an array of spherical mirrors to capture the incident light field. The use of mirror arrays instead of lenslet arrays has its advantages: it avoids chromatic aberrations caused by refraction, it does not require elaborate calibration between the lenslet array and the sensor, it captures images with a wide FoV, and it is less expensive and reconfigurable. The disadvantages are two-fold. First, each mirror image is non-pinhole and therefore requires forward projection to associate the reflection rays with 3D points. Second, the sampled light field is non-uniform. Two notable examples of such systems are the spherical mirror arrays by Ding et al. BIB011 and Taguchi et al. BIB013. In BIB011, the authors applied the GLC-based forward projection (Sect. 6.2) to multi-view space carving for reconstructing the 3D scene. Taguchi et al. BIB013 developed both a mirror array and a refractive sphere array and applied the axial camera formulation (Sect. 6.2) to compute a closed-form forward projection. They have shown various applications including distortion correction and light field rendering.

Light field probes
Analogous to catadioptric cameras vs. catadioptric projectors, the dual of the light field camera is the light field probe, i.e., a device in which the sensor is replaced with a projector. A real light field probe has been implemented using a backlight, a diffuser, a pattern, and a single lenslet or an array of lenslets, as shown in Fig. 19. Similar to the Lytro design, the pattern is placed at the focal plane of the lenslet array to simulate an array of projectors projecting towards infinity. The light field probe is, in effect, a multi-view display. It is also particularly useful for acquiring transparent surfaces. Notice that the light field probe can directly provide ray-ray correspondences, since the viewing camera can associate each pixel with a ray. Ye et al. BIB016 used a single-lens probe (a Bokode BIB012) for recovering dynamic fluid surfaces. They presented a robust feature matching algorithm based on the Active Appearance Model (AAM) to robustly establish ray-ray correspondences. The ray-ray correspondences then directly provide the surface normals, and they derive a new angular-domain surface integration scheme to recover the surface from the normal field. Wetzstein et al. BIB014 BIB015 also used the light field probe for reconstructing complex transparent objects. They encode both spatial and angular information using a specially designed color pattern. Specifically, they use gradients of different color channels (red and blue) to encode the 2D incident ray direction and the green channel to encode the 1D (vertical) spatial location of the pattern. The second (horizontal) spatial location can be recovered through geometric constraints. Their approach is able to achieve highly accurate ray-ray correspondences for reconstructing surface normals of complex static objects.
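To make the digital refocusing operation mentioned above concrete, the following minimal Python/NumPy sketch (an illustration under simplified assumptions, not code from any of the cited systems; the array names and sizes are made up) implements classic shift-and-add synthetic refocusing of a 4D light field L(u, v, s, t) captured under the two-plane parameterization: every angular sample (u, v) is shifted in the spatial plane by an amount proportional to a refocus parameter alpha, and the shifted views are averaged.

    import numpy as np

    def refocus(light_field, alpha):
        # light_field: shape (U, V, S, T); (u, v) are angular samples on the
        # aperture plane, (s, t) are spatial samples on the lenslet/image plane.
        # alpha: refocus parameter; alpha = 0 reproduces the original focal plane.
        U, V, S, T = light_field.shape
        u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0   # centre of the angular grid
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                # integer shift with wrap-around for brevity; a real implementation
                # would use sub-pixel interpolation and proper border handling
                ds = int(round(alpha * (u - u0)))
                dt = int(round(alpha * (v - v0)))
                out += np.roll(light_field[u, v], shift=(ds, dt), axis=(0, 1))
        return out / (U * V)

    # usage sketch with a random stand-in for a captured light field
    lf = np.random.rand(9, 9, 128, 128)
    photo_near = refocus(lf, alpha=0.5)    # synthetically focus closer
    photo_far = refocus(lf, alpha=-0.5)    # synthetically focus farther

The Fourier slice approach of BIB006 produces the same family of refocused photographs by extracting 2D slices of the 4D Fourier transform of the light field, which the authors show to be faster than direct integration.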
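The mask-based (heterodyne) acquisition described above can be sketched in the same spirit. The snippet below is only a schematic illustration of the tile-rearrangement idea, with made-up sizes; the exact tile ordering, frequency-shift conventions and mask calibration of the actual method BIB008 are glossed over here.

    import numpy as np

    def recover_light_field(sensor_image, n_u, n_v):
        # sensor_image: 2D photo of shape (n_u * S, n_v * T) captured through a
        # heterodyne attenuation mask, so that its 2D spectrum contains
        # n_u x n_v spectral replicas of the 4D light field spectrum.
        H, W = sensor_image.shape
        S, T = H // n_u, W // n_v
        spectrum = np.fft.fftshift(np.fft.fft2(sensor_image))
        # cut the centred 2D spectrum into n_u x n_v tiles and stack them as the
        # angular dimensions of a 4D spectrum
        lf_spectrum = spectrum.reshape(n_u, S, n_v, T).transpose(0, 2, 1, 3)
        # undo the centring and take the inverse 4D transform
        light_field = np.fft.ifftn(np.fft.ifftshift(lf_spectrum))
        return np.real(light_field)          # shape (n_u, n_v, S, T)

    # usage sketch on a synthetic stand-in image (9 x 9 angular samples)
    img = np.random.rand(9 * 64, 9 * 64)
    lf = recover_light_field(img, n_u=9, n_v=9)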
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> A passive, integrating electromagnetic radiation power dosimeter. A radiofrequency or microwave antenna is combined with a diode detector/rectifier, a squaring circuit, and a electrochemical storage cell to provide an apparatus for determining the average energy of electromagnetic radiation incident on a surface. After a particular period of irradiation, the dosimeter can be interrogated electrically or visually, depending on the type of electrochemical cell employed, to yield the desired information. The apparatus has a substantially linear response to the electromagnetic power density over a wide range of electromagnetic field, and all of the energy required to record the incident energy is supplied by the electromagnetic field. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> When humans talk with humans, they are able to use implicit situational information, or context, to increase the conversational bandwidth. Unfortunately, this ability to convey ideas does not transfer well to humans interacting with computers. In traditional interactive computing, users have an impoverished mechanism for providing input to computers. By improving the computer’s access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. The use of context is increasingly important in the fields of handheld and ubiquitous computing, where the user?s context is changing rapidly. In this panel, we want to discuss some of the research challenges in understanding context and in developing context-aware applications. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> We discuss algorithms for learning and revising user profiles that can determine which World Wide Web sites on a given topic would be interesting to a user. We describe the use of a naive Bayesian classifier for this task, and demonstrate that it can incrementally learn profiles from user feedback on the interestingness of Web sites. Furthermore, the Bayesian classifier may easily be extended to revise user provided profiles. In an experimental evaluation we compare the Bayesian classifier to computationally more intensive alternatives, and show that it performs at least as well as these approaches throughout a range of different domains. In addition, we empirically analyze the effects of providing the classifier with background knowledge in form of user defined profiles and examine the use of lexical knowledge for feature selection. We find that both approaches can substantially increase the prediction accuracy. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. 
Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Techniques for recommender systems <s> One of the potent personalization technologies powering the adaptive web is collaborative filtering. Collaborative filtering (CF) is the process of filtering or evaluating items through the opinions of other people. CF technology brings together the opinions of large interconnected communities on the web, supporting filtering of substantial quantities of data. In this chapter we introduce the core concepts of collaborative filtering, its primary uses for users of the adaptive web, the theory and practice of CF algorithms, and design decisions regarding rating systems and acquisition of ratings. We also discuss how to evaluate CF systems, and the evolution of rich interaction interfaces. We close the chapter with discussions of the challenges of privacy particular to a CF recommendation service and important open research questions in the field. <s> BIB006
|
RSs are software tools and techniques that provide suggestions of items that are most likely of interest to a particular user. Studies about recommendations, suggestions or content filtering for the tourism sector are not new. In 1986, (Michie, 1986) proposed that travellers construct their preferences for alternative destinations from their awareness and effectiveness; in 1989, a later study proposed a path model of direct and indirect relationships leading to destination choice. In the mid-1990's, a further work presented a framework for route selection in the Prince Edward Island region (Canada). The authors developed propositions suitable for empirical testing by using eight leisure traveller choice subsystems: destinations, accommodations, activities, visiting attractions, travel modes, eating options, destination areas, and routes. However, it is worth mentioning that they reported data collection, which was performed entirely manually, as their biggest limitation, together with the very limited amount of personal data about travellers available at the time. From this century on, with the continuously increasing number of web users and the beginning of the mobile age, the lack of data faced in the 90's by recommendation projects in the tourism sector is no longer a problem. This section summarises the main techniques used in RSs.

Content-Based (CB): Essentially, a CB RS learns to recommend items that are similar to those the user has liked in the past. The similarity of items is calculated based on the features associated with the compared items. The main advantage of this technique is its "user independence", given that it depends only on the user's own data; in other words, it identifies the common characteristics of items that have received a favourable rating from a user u, and then recommends to u new items that share those characteristics BIB003 BIB004 BIB001. For example, when a user has positively rated a point of interest (POI), the system can recommend similar POIs by calculating how similar the candidate POIs are to the rated one according to their features.

Collaborative Filtering (CF): It is the process of filtering or evaluating items using the opinions of other people BIB006. These opinions can be obtained explicitly from users through forms, or by using implicit measures, such as records of previous purchases. That is, CF is an algorithm for matching people with similar interests for the purpose of making recommendations. For instance, a system may recommend that a customer who travelled to Paris and Barcelona travel to Rome, because other users who travelled to Paris and/or Barcelona also travelled to Rome. Two types of CF algorithms can be found: (1) memory-based CF, where user rating data is used to compute the similarity between users or items, and (2) model-based CF, where models are developed using different data mining and machine learning algorithms to predict users' ratings of unrated items.

Knowledge-Based (KB): This technique works by recommending items based on specific domain knowledge about how certain item features meet users' needs and preferences and, ultimately, how the item is useful for the user. In other words, it generates recommendations based on knowledge about the user's needs with respect to a particular item. These recommendations are driven by measures of utility, derived from the knowledge of the relationship between a specific user and an item. For instance, a KB tourism RS will generate recommendations not only based on the past travel experience of the user, but also on the characteristics of the places/cities visited and of the places available to recommend; that is, a KB RS exploits knowledge to map a user to the products he or she likes. KB systems can use a wide range of techniques but, at the same time, they require considerable effort in terms of knowledge extraction, representation and system design.

Demographic Filtering (DF): Essentially, this algorithm recommends items based on the demographic profile of the user (Bobadilla et al, 2013). In other words, this technique provides different recommendations for different demographic niches, combining the ratings of users in these niches BIB005.

Finally, we also find hybrid RSs, which are based on the combination of the above-mentioned techniques (or some others, because this is not an exhaustive list). A hybrid RS combines techniques "X" and "Y", trying to exploit the advantages of "X" to mitigate the disadvantages of "Y" (and vice versa). Nowadays, there is a great variety of techniques, models, algorithms, etc. that are used in different RSs. For example, context-aware RSs take into account any information that characterises the situation of an entity (person, place or object) considered relevant to the interaction between a user and an application, including the user and the application themselves BIB002. For instance, in a tourism RS, the context referring to the season in which a person is going to travel is important, because recommendations of destinations in winter should be very different from those provided in summer BIB001.
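As a concrete illustration of the memory-based collaborative filtering described above, the following Python sketch (a toy example with an invented rating matrix, not taken from any of the surveyed systems) computes user-user cosine similarities over co-rated items and predicts an unknown rating as the similarity-weighted average of the ratings given by other users.

    import numpy as np

    # rows = users, columns = items (e.g. points of interest); 0 means "not rated"
    ratings = np.array([
        [5., 3., 0., 1.],
        [4., 0., 0., 1.],
        [1., 1., 0., 5.],
        [0., 1., 5., 4.],
    ])

    def cosine_sim(a, b):
        # cosine similarity restricted to the items both users have rated
        mask = (a > 0) & (b > 0)
        if not mask.any():
            return 0.0
        den = np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])
        return float(np.dot(a[mask], b[mask]) / den) if den else 0.0

    def predict(user, item):
        # similarity-weighted average of the other users' ratings for this item
        sims, vals = [], []
        for other in range(ratings.shape[0]):
            if other != user and ratings[other, item] > 0:
                sims.append(cosine_sim(ratings[user], ratings[other]))
                vals.append(ratings[other, item])
        if not sims or sum(sims) == 0:
            return 0.0          # cold-start case: no comparable neighbours
        return float(np.dot(sims, vals) / sum(sims))

    print(predict(user=0, item=2))   # estimate how user 0 would rate item 2

A content-based variant would apply the same cosine measure to item feature vectors (e.g. POI categories) instead of co-rated user vectors, and a hybrid system would combine both scores.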
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Social Networks <s> Social network sites (SNSs) are increasingly attracting the attention of academic and industry researchers intrigued by their affordances and reach. This special theme section of the Journal of Computer-Mediated Communication brings together scholarship on these emergent phenomena. In this introductory article, we describe features of SNSs and propose a comprehensive definition. We then present one perspective on the history of such sites, discussing key changes and developments. After briefly summarizing existing scholarship concerning SNSs, we discuss the articles in this special section and conclude with considerations for future research. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Social Networks <s> Most of the existing recommender systems for tourism apply knowledge-based and content-based approaches, which need sufficient historical rating information or extra knowledge and suffer from the cold start problem. In this paper, a demographic recommender system is utilized for the recommendation of attractions. This system categorizes the tourists using their demographic information and then makes recommendations based on demographic classes. Its advantage is that the history of ratings and extra knowledge are not needed, so a new tourist can obtain recommendation. Focusing on the attractions on Trip Advisor, we use different machine learning methods to produce prediction of ratings, so as to determine whether these approaches and demographic information of tourists are suitable for providing recommendations. Our preliminary results show that the methods and demographic information can be used to predict tourists' ratings on attractions. But using demographic information alone can only achieve limited accuracy. More information such as textual reviews is required to improve the accuracy of the recommendation. <s> BIB002
|
SNs are means of electronic communication through which users create online communities to share information, ideas, personal messages, and other content (such as videos). To qualify as a SN, a web page must offer three essential characteristics: services that allow individuals to construct a public or semi-public profile within a bounded system, the possibility to articulate a list of other users with whom they share a connection, and the opportunity of viewing and traversing their list of connections and those made by others within the system BIB001. There are further definitions, such as that of (Kaplan and Haenlein, 2010), who define SNs as a group of internet-based applications that build on the ideological and technological foundations of Web 2.0 and allow the creation and exchange of user-generated content. Also, for these authors, SNs are applications that enable users to connect by creating personal information profiles, inviting friends and colleagues to have access to their profiles, and sending e-mails and instant messages to each other. In brief, a SN is a structure composed of people or organisations that share values and common goals. Figure 1 represents, on the one hand, the individual means of communication (1 to 1), such as phones and internet telephony service providers (like Skype); and, on the other hand, the mass media (1 to n), like TV, radio, and printed or online newspapers and magazines. Finally, if these two scenarios are combined, SNs (n to n) emerge as we know them today BIB002.

Since their creation, SNs have been producing an astonishing amount of data, as previously mentioned. Such growth concerns not only the available content, but also the growing use of the internet and, consequently, of SNs. For instance, in the middle of 2015, Facebook reached 1.5 billion users who had used it at least once in a month; this means that one in seven people in the world connected to Facebook in 2015. Nowadays, even with increasingly restrictive policies, it is possible to obtain not only standard data widely used in traditional forms (e.g. name, age, gender, marital status) but also extremely "intimate" information about users, such as personal preferences, likes, past trips or even where the person wants to travel to. With such valuable information available in SNs, we understand they can enrich and improve the predictions of RSs in the tourism sector.
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> In this paper, we propose a method: Context Rank, which utilizes the vast quantity of geo tagged photos in photo sharing website to recommend travel locations. To enhance the personalized recommendation performance, our method exploits different context information of photos, such as textual tags, geo tags, visual information, and user similarity. Context Rank first detects landmarks from photos' GPS locations, and estimates the popularity of each landmark. Within each landmark, representative photos and tags are extracted. Furthermore, Context Rank calculates the user similarity based on users' travel history. When a user's geo tagged photos are given, the landmark popularity, representative photos and tags, and the user similarity are used to predict the user preference of a landmark from different aspects. Finally a learning to rank algorithm is introduced to combine different preference predictions to give the final recommendation. Experiments performed on a dataset collected from Panoramio show that the Context Rank can obtain a better result than the state-of-the-art method. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> Due to the recent wide spread of camera devices with GPS, the number of geotagged photos on the Web is increasing rapidly. Some image retrieval systems and travel recommendation systems which make use of geotagged images on the Web have been proposed so far. While most of them handle a large number of geotagged images as a set of location points, in this paper we handle them as sequences of location points. We propose a travel route recommendation system which utilizes actual travel paths extracted from a large number of photos uploaded by many people on the Web. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> On-line photo sharing services allow users to share their touristic experiences. Tourists can publish photos of interesting locations or monuments visited, and they can also share comments, annotations, and even the GPS traces of their visits. By analyzing such data, it is possible to turn colorful photos into metadata-rich trajectories through the points of interest present in a city. ::: ::: In this paper we propose a novel algorithm for the interactive generation of personalized recommendations of touristic places of interest based on the knowledge mined from photo albums and Wikipedia. The distinguishing features of our approach are multiple. First, the underlying recommendation model is built fully automatically in an unsupervised way and it can be easily extended with heterogeneous sources of information. Moreover, recommendations are personalized according to the places previously visited by the user. Finally, such personalized recommendations can be generated very efficiently even on-line from a mobile device. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> The paper proposes a description of information decision support system in the tourism domain and a set of methods and algorithms for generating recommendations for a user that allow significant increase of the system usability. The system generates for the user recommendations which attractions at the moment are better to attend based on the user preferences and the current situation in the location area. 
The system also allows showing the user information about interesting attraction in more detail, which is based on analyzing information evaluations made by other users. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> The fast development of Web technologies has introduced a world of big data. How efficiently and effectively to retrieve the information from the ocean of data that the users really want is an important topic. Recommendation systems have become a popular approach to personalized information retrieval. On the other hand, social media have quickly entered into your life. The information from social networks can be an effective indicator for recommender systems. In this paper we present a recommendation mechanism which calculates similarity among users and users' trustability and analyzes information collected from social networks. To validate our method an information system for tourist attractions built on this recommender system has been presented. We further evaluate our system by experiments. The results show our method is feasible and effective. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> Popularity of social networking services (SNS) and location-based SNS (LBSNS) have an influence on lifestyles of many people. Furthermore, the advancement of mobile technology enables people to share their interests and lifestyles to their friends conveniently. These factors cause the Internet to become massive personal information resource. A mobile personalized recommendation (MPR) engine plays an important role in offering solely essential information to prevent information overload for the users. Unfortunately, processing time of traditional MPR engine is high. This paper proposes an approach to improve performance of MPR using multithread programming (MP). The experimental results indicate that the multithread programming (MP) could deliver higher performance than sequential programming (SP), especially speedup between 5 and 7 times approximately. <s> BIB006 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. 
Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type. <s> BIB007 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> We are developing a recommender system for tourist spots. The challenge is mainly to characterize tourist spots whose features change dynamically with trends, events, season, and time of day. Our method uses a one-class support vector machine (OC-SVM) to detect the regions of substantial activity near target spots on the basis of tweets and photographs that have been explicitly geotagged. A tweet is regarded as explicitly geotagged if the text includes the name of a target spot. A photograph is regarded as explicitly geotagged if the title includes the name of a target spot. To characterize the tourist spots, we focus on geotagged tweets, which are rapidly increasing on the Web. The method takes unknown geotagged tweets originating in activity regions and maps these to target spots. In addition, the method extracts features of the tourist spots on the basis of the mapped tweets. Finally, we demonstrate the effectiveness of our method through qualitative analyses using real datasets on the Kyoto area. <s> BIB008 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> In recent years, million geo-tagged photos are available in online web service like Flickr, panoramio, etc. People contributing geo-tagged photo and share their travel experiences these media. The photo itself has important information sharing reveals like location, time, tags, title, and weather. We recommend the new method locations travel for tourists according their time and their preference. We get travel user preference according his/her past time in one city and recommendation another city. We examine our technique collect dataset from Flickr publically available and taken different cities of china. Experiment results show that our travel recommendation method according to tourist time capable to predict tourist location recommendation famous places or new places more precise and give better recommendation compare to state of art landmarks recommendation method and personalized travel method. <s> BIB009 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> In this paper, we consider a tourism recommender system which can recommend sightseeing spots for users who wish to make a travel plan for a designated time period such as early autumn and Christmas vacation. A key issue in realizing such seasonal recommendations is how to calculate feature vector of each spot which would vary depending on the time of travel. We propose a two-phase scheme which generates seasonal feature vectors for each sightseeing spot. In the first phase, the basic feature vector is generated for each spot using the description of Wikipedia and the TF-IDF weights. In the second phase, seasonal feature vectors are generated for each spot by referring to the distribution of keywords contained in tweets associated with spots for each season. The performance of the scheme is evaluated via experiments using actual data set drawn from Wikipedia and Twitter. <s> BIB010 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> Photo sharing sites like Flickr and Instagram have grown increasingly popular in recent years, resulting in a large amount of uploaded photos. 
In addition, these photos contain useful meta-data such as the taken time and geo-location. Using such geo-tagged photos and Wikipedia, we propose an approach for recommending tours based on user interests from his/her visit history. We evaluate our proposed approach on a Flickr dataset comprising three cities and find that our approach is able to recommend tours that are more popular and comprise more places/points-of-interest, compared to various baselines. More importantly, we find that our recommended tours reflect the ground truth of real-life tours taken by users, based on measures of recall, precision and F1-score. <s> BIB011 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Methodology <s> Travel recommendation systems can tackle the problem of information overload and recommend proper attractions on the basis of users' preferences. Most existing travel recommendation systems utilized travel history, yet neglected the low frequency of tourism and the flexible styles of attractions in different cities, which will cause the inaccuracy in both collaborative filtering recommendation and content-based recommendation. To deal with this issue, we propose a novel personalized travel recommendation framework by leveraging explicit user interaction and multi-modality travel information. As far as we known, it is the first time that attractions are recommended by user interaction and collective intelligence in a unified framework. Specifically, we first collect heterogeneous travel information by multi-user sharing, which is regarded as collective intelligence to provide reliable references by other travelers. Second, valuable knowledge is mined from collective intelligence in order to filter out the noisy data and make travel information structured. Then, personalized attraction similarity (PAS) model is designed to suggest attractions through fusing heterogeneous information with weighted adaptation and simultaneously considering explicit user interaction. Finally, context information such as the user's location is well adopted to refine the recommendation that may influence the user's choice at a particular moment. Experimental results on pseudo-relevance data and real-world data demonstrate that our method gains promising performance in terms of effectiveness as well as efficiency. HighlightsThis paper proposes a framework of personalized attraction recommendation in tourism.It mines collective intelligence from heterogeneous travel multimedia on social media.PAS-model is employed to recommend similar attractions with explicit interaction.We can fuse heterogeneous collective intelligence with weight-adaptation.Context information is considered to refine the final results for freshness and surprise. <s> BIB012
|
As explained above, our aim in this survey is to analyse existing works on tourism RSs that use data from SNs. Our search for scientific papers was performed by means of a filtering process in several databases, such as ACM Digital Library, IEEE Xplore, dblp, Emerald, Springer Link, Science Direct, Web of Science, Scopus and Dialnet Plus, among other open access databases like DOAJ. Only articles and e-books were selected as document types, and we only selected RSs (searched as "recommend*") oriented to the tourism sector ("touris*") that used some type of data from SNs in their model ("social network*").

Fig. 2 The growth of online data and tourism RSs research based on SNs.

Figure 2 shows a summary of the result of this search. Here, we can observe the relation between the number of scientific papers found in our search and the amount of data available online. The number of publications was insignificant until the year 2004, so earlier years are not depicted in Figure 2. However, with the expansion of the use of SNs in 2004, data (text, video, audio and other files) started to increase, although the number of related papers oscillated between 3 and 14 in the subsequent 5 years. From 2009 onwards, the growth of the research represented in Figure 2 is clear, which can be associated with the growth in the volume of data available online, measured in zettabytes (thanks to the inclusion of new devices such as tablets and the increasing number of smartphones), in addition to the release of APIs for the main SNs. In 2009 and 2010, the number of scientific papers found rose from 14 to 24 and kept growing until reaching a peak of 59 papers in 2015. In the following year, though, only 31 projects were found, which could be related to the limitation of access to the main SNs' data through their APIs. An example of this data limitation is Facebook, which limited the access to users' data in 2015. Other SNs such as Twitter and LinkedIn are also putting restrictions on the data accessible through their APIs. In parallel, it should not be disregarded that the volume of data (text, video, audio and other files) kept increasing from 2004 to 2016, when 14 zettabytes of data were generated on the internet.

From the 312 papers that fulfilled our search parameters, we selected 31 papers to be deeply analysed, following three criteria: those which used the best-known SNs (based on the number of users); those focused on the development of a practical application, that is, real RSs; and those with a relevant number of citations. Tables 1 and 2 show the list of selected papers in descending order by year of publication. Once the target works to be studied were defined, we classified these papers by the different aspects that we wanted to analyse, which are shown in the columns of Tables 1 and 2. Specifically, these aspects are:
1. SNs, that is, from which SN data is extracted.
2. Other data sources, that is, additional data sources used in these papers (if any).
3. Extracted items, that is, which type of information is extracted from SNs.
4. Recommendation technique, which indicates the recommendation techniques used in each system.
5. Evaluation, indicating whether the evaluation of the system was performed using synthetic data or real users.
6. Recommendation system properties, describing which desirable properties are pursued in each system, such as accuracy, serendipity, etc.
7. Output, regarding whether the RS shows a list of POI recommendations, a route or a guide.
8. Interface used by the user to interact with the RS.
We also discuss the relevance of each aspect and the strengths of the main systems, which will be detailed in the next sections.

This section gives a brief review of the main SNs employed in tourism RSs in the last years, which are summarised in the second column of Table 1. We found projects that work with widely used SNs such as Facebook and Twitter, and others focused on a more specialised audience, such as Flickr, which allows the user to store, search, sell and share photos or videos; Foursquare, a local search-and-discovery service which provides personalised recommendations of places to go near a user's current location, based on the user's previous browsing history, purchases, or check-in history; and Traveleye, focused on trip organisation, which allows users to write posts with travel experiences, to follow other travellers' journeys, to share travels with friends, to search tourist attractions and travel guides, etc. We also found works that used SNs that are no longer available, such as Picasa (Lemos et al, 2010) and Panoramio, which was a geo-located tagging, photo sharing mashup acquired by Google in 2007.

In relation to the analysed projects, we can observe that Flickr was the most used, in 58% of the projects BIB012 BIB009, among others; then Panoramio, with 23% BIB004 BIB001. The advantage of these SNs is that they enable the collection of "Coordinates", some "Geotag Labels" and even data about the person who took the photo, providing researchers with interesting data for RSs. Facebook and Twitter (1st and 5th most used in the world) are used in only 10% of the analysed works. This low rate can be explained by the fact that both are generalist SNs and, therefore, data related to tourism is more difficult to obtain. Several works BIB005 BIB006 have used Facebook to obtain numbers of likes, groups, friends, comments or geotags of check-ins. With regard to Twitter, the core data are the user tweets, retrieved with different goals. For instance, BIB010 considered the concept of sightseeing spots for different seasons, thus generating seasonal feature vectors for each sightseeing spot, which could support context-aware recommendation of tourist spots depending on the time of the year. Tweets can also be used to characterise the tourist spots BIB008, or be combined with sentiment analysis to determine the current "mood" of each tourist BIB007. (Van Canneyt et al, 2011) opted to work with Twitter and Traveleye in their project: the former was employed for sentiment analysis merged with context-aware data (location, weather and time), while Traveleye was used to extract the moment when the user visited a given city.

Figure 3 shows the temporal evolution of some of the SNs that stand out in the recommendation projects oriented to the tourism sector since 2006, relating their appearance to the number of papers found in our search. In the middle of 2006, Twitter released an Application Programming Interface (API) to ease the access to data. Nowadays, all the major SNs have their own APIs, which allow data to be obtained in an organised and automated way by means of function calls. Combined with OAuth, released in 2008, APIs enable wider approaches to user integration, besides adding value for the user, the developer and the application. We observe that a number of projects, independently of the SN used, started to appear in 2008 and kept growing until 2015. Specifically, Facebook was used in 16 papers in 2015, followed by Flickr and Twitter, with 12 and 8 papers, respectively. One of the components that boosted such growth could be the maturity of the available technologies, with the definition of new standards, protocols and documentation for their platforms. This opened an opportunity, even for non-IT researchers, to access data, integrate systems and develop new tools in a quick and simplified way. The reduction in the publications registered in 2016 seems to be related to the limitation on the access to users' data imposed by the main SNs, as explained previously. In summary, Figure 3 shows the relation between the ease of access to data from SNs after the standardisation of access and authentication (APIs, OAuth, etc.) and the volume of published papers that use these SNs.

On the other hand, Table 1 shows, in the third column, other data sources used in the analysed papers as complementary sources. Most of the analysed projects used these additional data sources for showing the recommendations on a map. For example, BIB002 and (Sun et al, 2013) used Yahoo Maps, Google Maps and OpenStreetMap. Others, such as BIB011, used Wikipedia to extract the list of POIs, latitude/longitude coordinates, and interest categories. BIB003 considered that the advantage of using Wikipedia is twofold: on the one hand, it allows the identification of a large number of POIs in every city (even the less popular ones) and, on the other hand, it provides additional structured information about each POI (e.g. a subdivision of categories). (Sun et al, 2013) chose to use TripAdvisor to obtain a dictionary of landmarks. In the same scenario, BIB012 used TripAdvisor to retrieve user comments about candidate attractions, as well as the users' ratings of each attraction. BIB004, in addition to Wikipedia and Panoramio, also used another data source, Wikivoyage, to obtain detailed information about the attractions.
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> This paper presents Near2me, a prototype system implementing a travel recommender concept that generates recommendations that are not only personalized, but also authentic. Exploitation of implicit situational knowledge makes it possible for Near2me to recommend places that are not necessarily touristic or famous, but rather are genuinely representative of place and also match users' personal interests. The system allows users to explore, evaluate, and understand recommendations, control recommendation direction and discover informative supporting material. This functionality makes it possible for users to assess recommendations and confirm their suitability and authentic nature. The recommendation system makes use of user photos from the image sharing community Flickr. We take the position that a social media-based environment incorporating multimedia content items, user-contributed annotations and social network connections is uniquely suited to providing users with authentic, personalized recommendations. First results of a user study allow us to conclude that users are interested in exploring locations, topics, and people from different perspectives and confirm authenticity as a relevance criterion. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> In this paper, we propose a method: Context Rank, which utilizes the vast quantity of geo tagged photos in photo sharing website to recommend travel locations. To enhance the personalized recommendation performance, our method exploits different context information of photos, such as textual tags, geo tags, visual information, and user similarity. Context Rank first detects landmarks from photos' GPS locations, and estimates the popularity of each landmark. Within each landmark, representative photos and tags are extracted. Furthermore, Context Rank calculates the user similarity based on users' travel history. When a user's geo tagged photos are given, the landmark popularity, representative photos and tags, and the user similarity are used to predict the user preference of a landmark from different aspects. Finally a learning to rank algorithm is introduced to combine different preference predictions to give the final recommendation. Experiments performed on a dataset collected from Panoramio show that the Context Rank can obtain a better result than the state-of-the-art method. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Recommendation systems provide focused information to users on a set of objects belonging to a specific domain. The proposed recommender system provides personalized suggestions about touristic points of interest. The system generates recommendations, consisting of touristic places, according to the current position of a tourist and previously collected data describing tourist movements in a touristic location/city. The touristic sites correspond to a set of points of interest identified a priori. We propose several metrics to evaluate both the spatial coverage of the dataset and the quality of recommendations produced. We assess our system on two datasets: a real and a synthetic one. Results show that our solution is a viable one. 
<s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Due to the recent wide spread of camera devices with GPS, the number of geotagged photos on the Web is increasing rapidly. Some image retrieval systems and travel recommendation systems which make use of geotagged images on the Web have been proposed so far. While most of them handle a large number of geotagged images as a set of location points, in this paper we handle them as sequences of location points. We propose a travel route recommendation system which utilizes actual travel paths extracted from a large number of photos uploaded by many people on the Web. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Trip planning is generally a very time-consuming task due to the complex trip requirements and the lack of convenient tools/systems to assist the planning. In this paper, we propose a travel path search system based on geo-tagged photos to facilitate tourists' trip planning, not only for where to visit but also how to visit. The large scale geo-tagged photos that are public ally available on the web make this system possible, as geo-tagged photos encode rich travel-related metadata and can be used to mine travel paths from previous tourists. In this work, about 20 million geo-tagged photos were crawled from Panoramio.com. Then a substantial number of travel paths are minded from the crawled geo-tagged photos. After that, a search system is built to index and search the paths, and the Sparse Chamfer Distance is proposed to measure the similarity of two paths. The search system supports various types of queries, including (1) a destination name, (2) a user-specified region on the map, (3) some user-preferred locations. Based on the search system, users can interact with the system by specifying a region or several interest points on the map to find paths. Extensive experiments show the effectiveness of the proposed framework. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> The fast development of Web technologies has introduced a world of big data. How efficiently and effectively to retrieve the information from the ocean of data that the users really want is an important topic. Recommendation systems have become a popular approach to personalized information retrieval. On the other hand, social media have quickly entered into your life. The information from social networks can be an effective indicator for recommender systems. In this paper we present a recommendation mechanism which calculates similarity among users and users' trustability and analyzes information collected from social networks. To validate our method an information system for tourist attractions built on this recommender system has been presented. We further evaluate our system by experiments. The results show our method is feasible and effective. <s> BIB006 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. 
In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type. <s> BIB007 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> We are developing a recommender system for tourist spots. The challenge is mainly to characterize tourist spots whose features change dynamically with trends, events, season, and time of day. Our method uses a one-class support vector machine (OC-SVM) to detect the regions of substantial activity near target spots on the basis of tweets and photographs that have been explicitly geotagged. A tweet is regarded as explicitly geotagged if the text includes the name of a target spot. A photograph is regarded as explicitly geotagged if the title includes the name of a target spot. To characterize the tourist spots, we focus on geotagged tweets, which are rapidly increasing on the Web. The method takes unknown geotagged tweets originating in activity regions and maps these to target spots. In addition, the method extracts features of the tourist spots on the basis of the mapped tweets. Finally, we demonstrate the effectiveness of our method through qualitative analyses using real datasets on the Kyoto area. <s> BIB008 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> In recent years, million geo-tagged photos are available in online web service like Flickr, panoramio, etc. People contributing geo-tagged photo and share their travel experiences these media. The photo itself has important information sharing reveals like location, time, tags, title, and weather. We recommend the new method locations travel for tourists according their time and their preference. We get travel user preference according his/her past time in one city and recommendation another city. We examine our technique collect dataset from Flickr publically available and taken different cities of china. Experiment results show that our travel recommendation method according to tourist time capable to predict tourist location recommendation famous places or new places more precise and give better recommendation compare to state of art landmarks recommendation method and personalized travel method. 
<s> BIB009 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> In this paper, we consider a tourism recommender system which can recommend sightseeing spots for users who wish to make a travel plan for a designated time period such as early autumn and Christmas vacation. A key issue in realizing such seasonal recommendations is how to calculate feature vector of each spot which would vary depending on the time of travel. We propose a two-phase scheme which generates seasonal feature vectors for each sightseeing spot. In the first phase, the basic feature vector is generated for each spot using the description of Wikipedia and the TF-IDF weights. In the second phase, seasonal feature vectors are generated for each spot by referring to the distribution of keywords contained in tweets associated with spots for each season. The performance of the scheme is evaluated via experiments using actual data set drawn from Wikipedia and Twitter. <s> BIB010 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Photo sharing sites like Flickr and Instagram have grown increasingly popular in recent years, resulting in a large amount of uploaded photos. In addition, these photos contain useful meta-data such as the taken time and geo-location. Using such geo-tagged photos and Wikipedia, we propose an approach for recommending tours based on user interests from his/her visit history. We evaluate our proposed approach on a Flickr dataset comprising three cities and find that our approach is able to recommend tours that are more popular and comprise more places/points-of-interest, compared to various baselines. More importantly, we find that our recommended tours reflect the ground truth of real-life tours taken by users, based on measures of recall, precision and F1-score. <s> BIB011 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What data are extracted from social networks? <s> Travel recommendation systems can tackle the problem of information overload and recommend proper attractions on the basis of users' preferences. Most existing travel recommendation systems utilized travel history, yet neglected the low frequency of tourism and the flexible styles of attractions in different cities, which will cause the inaccuracy in both collaborative filtering recommendation and content-based recommendation. To deal with this issue, we propose a novel personalized travel recommendation framework by leveraging explicit user interaction and multi-modality travel information. As far as we known, it is the first time that attractions are recommended by user interaction and collective intelligence in a unified framework. Specifically, we first collect heterogeneous travel information by multi-user sharing, which is regarded as collective intelligence to provide reliable references by other travelers. Second, valuable knowledge is mined from collective intelligence in order to filter out the noisy data and make travel information structured. Then, personalized attraction similarity (PAS) model is designed to suggest attractions through fusing heterogeneous information with weighted adaptation and simultaneously considering explicit user interaction. Finally, context information such as the user's location is well adopted to refine the recommendation that may influence the user's choice at a particular moment. 
Experimental results on pseudo-relevance data and real-world data demonstrate that our method gains promising performance in terms of effectiveness as well as efficiency. HighlightsThis paper proposes a framework of personalized attraction recommendation in tourism.It mines collective intelligence from heterogeneous travel multimedia on social media.PAS-model is employed to recommend similar attractions with explicit interaction.We can fuse heterogeneous collective intelligence with weight-adaptation.Context information is considered to refine the final results for freshness and surprise. <s> BIB012
|
Recommender Systems mainly need two types of information: information about user tastes and preferences, and information about the items to recommend. In our analysis, we have noted that SNs are used for retrieving both. Regarding items, SNs can be used for discovering new items or for adding characteristics to items already present in the RS database. In the last column of Table 1 , we can see that, regardless of the type of SN used in the reviewed projects, the collected data are quite similar: 87% of them use an SN for obtaining "Geotag Photos", that is, labels containing geographical identification metadata such as latitude and longitude coordinates, and possibly also altitude, bearing, distance and accuracy data, as in BIB003 BIB009 , among others; 71% extract "Geotag Labels", which are labels indicating the name of the city, country or address, or labels that describe the photo, fundamental in projects such as BIB009 ; and, finally, the "Geotag Timestamp", which indicates when a photo (for example) was taken, is used in 45% of the projects BIB004 BIB005 . Less used, but also important, is textual information such as "Comments" and "Tweets", which are used to extract keywords/labels commonly exploited in text mining and sentiment analysis projects. "Comments" were used in 16% of the projects, such as BIB001 BIB002 or BIB006 , which extracted items shared by the user on Facebook along with likes, comments and ratings. On the other hand, BIB010 BIB007 BIB008 worked with tweets in their projects. Only 6% of the papers used a "Geotag Weather", i.e. tags that contain weather information for a particular location, which helps the development of context-aware systems BIB011 BIB008 BIB003 ; the same figure applies to "Rating Items" BIB012 BIB006 , that is, the extraction of ratings such as online evaluations made by the users, which indicate their level of satisfaction (e.g. stars, rankings, likes) regarding restaurants, hotels, cities, POIs, routes, etc. Table 1 shows that many projects combine several of these data. For instance, BIB012 collected heterogeneous data using Flickr, TripAdvisor and Wikitravel: from Flickr, photos with metadata (time, location, attraction and user ID); from TripAdvisor, user comments about the candidate attractions and the user rating of each attraction; and from Wikitravel, official travelogues. Such heterogeneity results in a performance gain in terms of both effectiveness and efficiency; an example is the "coordinates" of POIs, which are combined with "comments" and "rating items" from TripAdvisor with the aim of learning from the experience of tourists who already visited the POI. In this case, collective intelligence is first gathered from a large amount of user-generated content in social media, and different aspects of knowledge can then be mined from this collective intelligence to denoise the data and structure the heterogeneous information. In BIB008 , three different data sources were used: Foursquare, to obtain POI names, coordinates and categories; Twitter, for the date, hour and coordinates of the visit; and Panoramio, to obtain a POI photo with its title, coordinates and owner.
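As an illustration of how such heterogeneous data can be combined, the following minimal sketch merges geotagged photo metadata with review comments and ratings into per-POI profiles. The field names, input format and aggregate features are our own assumptions for illustration, not the actual pipeline of any of the cited works.

```python
from collections import defaultdict
from statistics import mean


def build_poi_profiles(photos, reviews):
    """Merge geotagged photo metadata (e.g. from a photo-sharing SN) with
    review data (e.g. from a review site) into per-POI profiles.

    `photos`  : iterable of dicts with poi_id, latitude, longitude, tags, taken
    `reviews` : iterable of dicts with poi_id, rating, text
    """
    profiles = defaultdict(lambda: {"coords": None, "tags": set(),
                                    "timestamps": [], "ratings": [], "comments": []})
    for p in photos:
        prof = profiles[p["poi_id"]]
        prof["coords"] = (p["latitude"], p["longitude"])
        prof["tags"].update(p.get("tags", []))
        prof["timestamps"].append(p.get("taken"))
    for r in reviews:
        prof = profiles[r["poi_id"]]
        prof["ratings"].append(r["rating"])
        prof["comments"].append(r["text"])
    # Derive simple aggregate features of the kind exploited by the surveyed systems.
    for prof in profiles.values():
        prof["mean_rating"] = mean(prof["ratings"]) if prof["ratings"] else None
        prof["popularity"] = len(prof["timestamps"])  # photo count as a popularity proxy
    return dict(profiles)
```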
In other words, three types of datasets were used — tourist spots, geotagged tweets and geotagged photographs — to build a method for mapping geotagged tweets to tourist spots on the basis of the spots' substantial activity regions, and also for extracting temporal and phrasal features from the mapped tweets, with a positive level of effectiveness according to the experiments carried out. The second goal when obtaining data from SNs is the discovery of behavioural patterns, preferences and personal characteristics of users. In this case, the extraction of user profiles, friends and comments, which are the three key components of SNs (Danah and Nicole, 2007) , is particularly valuable. For example, ) recommend attractions that are likely to fit the current user expectations by exploiting the information exposed by user preferences; they rely on the current user profile in the SN OpenSocial 5 , which determines the common characteristics of the previously visited places and the user behaviour. Within this project, several elements were extracted from the SN: (1) coordinates, i.e. the user's location, which allows a set of places to be offered but also contacts or friends in the surrounding areas to be detected; (2) time and weather, to recommend indoor locations when the weather leaves no other possibility, also taking into consideration the timetable restrictions of attractions; and (3) the user profile, obtained through explicit interaction with the user, determining what their interests are, what kind of places they prefer to visit and the ratings given to attractions, but also through implicit data retrieval, collecting information regarding favourite painters, writers or music preferences, for instance. Along the same lines, the VISIT project BIB007 used five types of contextual data: location, time, weather, social media sentiment and personalisation. Location is extracted from the three main outdoor location-sensing techniques: GPS, GSM and WiFi; time is calculated from the amount of time that a user stays at each attraction; weather is extracted from WorldWeatherOnline; social media sentiment analysis is performed on Twitter messages (tweets) in real time to determine the current mood of each tourist attraction; and personalisation uses the user profile data describing a person in terms of age, gender, relationship status and number of children, which can serve as a starting point for the application when it is first launched with no previous history. As we can see, data extraction can be performed over a single SN or over several, and in each case one or more pieces of information about items can be extracted. In general, the use of data from SNs has some interesting advantages, such as counting on real data, the possibility of later running tests with users and the availability of well-defined APIs provided by the most important SNs, which makes the development of these projects easier. However, we observe that, regardless of the SN, researchers face the same problem: irrelevant or false data, caused not only by users who insert incorrect information or omit it, but also by those responsible for the development of the SNs, who sometimes neither specify categories or standards clearly nor establish required fields.
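To illustrate the kind of social media sentiment signal mentioned above, the sketch below assigns a simple "mood" score to each attraction from tweets that have already been mapped to attractions, using a small hand-made lexicon. This is only an illustrative toy under our own assumptions: systems such as VISIT use considerably more sophisticated sentiment analysis, and the word lists and tweet-to-attraction mapping are not taken from any cited work.

```python
from collections import defaultdict

# Illustrative sentiment lexicons; real systems use much richer resources.
POSITIVE = {"amazing", "beautiful", "great", "love", "wonderful", "fun"}
NEGATIVE = {"crowded", "boring", "expensive", "awful", "dirty", "closed"}


def attraction_mood(tweets):
    """`tweets` is an iterable of (attraction_id, text) pairs.

    Returns a score in [-1, 1] per attraction: the share of positive minus
    negative words among all matched sentiment words."""
    counts = defaultdict(lambda: [0, 0])  # attraction -> [positive, negative]
    for attraction, text in tweets:
        for word in text.lower().split():
            if word in POSITIVE:
                counts[attraction][0] += 1
            elif word in NEGATIVE:
                counts[attraction][1] += 1
    return {
        a: (pos - neg) / (pos + neg)
        for a, (pos, neg) in counts.items() if pos + neg > 0
    }


print(attraction_mood([("sagrada_familia", "Amazing but very crowded today"),
                       ("park_guell", "Beautiful views, great sunset")]))
```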
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> The ability to create geotagged photos enables people to share their personal experiences as tourists at specific locations and times. Assuming that the collection of each photographer's geotagged photos is a sequence of visited locations, photo-sharing sites are important sources for gathering the location histories of tourists. By following their location sequences, we can find representative and diverse travel routes that link key landmarks. In this paper, we propose a travel route recommendation method that makes use of the photographers' histories as held by Flickr. Recommendations are performed by our photographer behavior model, which estimates the probability of a photographer visiting a landmark. We incorporate user preference and present location information into the probabilistic behavior model by combining topic models and Markov models. We demonstrate the effectiveness of the proposed method using a real-life dataset holding information from 71,718 photographers taken in the United States in terms of the prediction accuracy of travel behavior. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> This paper presents Near2me, a prototype system implementing a travel recommender concept that generates recommendations that are not only personalized, but also authentic. Exploitation of implicit situational knowledge makes it possible for Near2me to recommend places that are not necessarily touristic or famous, but rather are genuinely representative of place and also match users' personal interests. The system allows users to explore, evaluate, and understand recommendations, control recommendation direction and discover informative supporting material. This functionality makes it possible for users to assess recommendations and confirm their suitability and authentic nature. The recommendation system makes use of user photos from the image sharing community Flickr. We take the position that a social media-based environment incorporating multimedia content items, user-contributed annotations and social network connections is uniquely suited to providing users with authentic, personalized recommendations. First results of a user study allow us to conclude that users are interested in exploring locations, topics, and people from different perspectives and confirm authenticity as a relevance criterion. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> In this paper, we propose a method: Context Rank, which utilizes the vast quantity of geo tagged photos in photo sharing website to recommend travel locations. To enhance the personalized recommendation performance, our method exploits different context information of photos, such as textual tags, geo tags, visual information, and user similarity. Context Rank first detects landmarks from photos' GPS locations, and estimates the popularity of each landmark. Within each landmark, representative photos and tags are extracted. Furthermore, Context Rank calculates the user similarity based on users' travel history. When a user's geo tagged photos are given, the landmark popularity, representative photos and tags, and the user similarity are used to predict the user preference of a landmark from different aspects. 
Finally a learning to rank algorithm is introduced to combine different preference predictions to give the final recommendation. Experiments performed on a dataset collected from Panoramio show that the Context Rank can obtain a better result than the state-of-the-art method. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> On-line photo sharing services allow users to share their touristic experiences. Tourists can publish photos of interesting locations or monuments visited, and they can also share comments, annotations, and even the GPS traces of their visits. By analyzing such data, it is possible to turn colorful photos into metadata-rich trajectories through the points of interest present in a city. ::: ::: In this paper we propose a novel algorithm for the interactive generation of personalized recommendations of touristic places of interest based on the knowledge mined from photo albums and Wikipedia. The distinguishing features of our approach are multiple. First, the underlying recommendation model is built fully automatically in an unsupervised way and it can be easily extended with heterogeneous sources of information. Moreover, recommendations are personalized according to the places previously visited by the user. Finally, such personalized recommendations can be generated very efficiently even on-line from a mobile device. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> Recommendation systems provide focused information to users on a set of objects belonging to a specific domain. The proposed recommender system provides personalized suggestions about touristic points of interest. The system generates recommendations, consisting of touristic places, according to the current position of a tourist and previously collected data describing tourist movements in a touristic location/city. The touristic sites correspond to a set of points of interest identified a priori. We propose several metrics to evaluate both the spatial coverage of the dataset and the quality of recommendations produced. We assess our system on two datasets: a real and a synthetic one. Results show that our solution is a viable one. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> The fast development of Web technologies has introduced a world of big data. How efficiently and effectively to retrieve the information from the ocean of data that the users really want is an important topic. Recommendation systems have become a popular approach to personalized information retrieval. On the other hand, social media have quickly entered into your life. The information from social networks can be an effective indicator for recommender systems. In this paper we present a recommendation mechanism which calculates similarity among users and users' trustability and analyzes information collected from social networks. To validate our method an information system for tourist attractions built on this recommender system has been presented. We further evaluate our system by experiments. The results show our method is feasible and effective. <s> BIB006 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. 
This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type. <s> BIB007 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> The paper proposes a description of information decision support system in the tourism domain and a set of methods and algorithms for generating recommendations for a user that allow significant increase of the system usability. The system generates for the user recommendations which attractions at the moment are better to attend based on the user preferences and the current situation in the location area. The system also allows showing the user information about interesting attraction in more detail, which is based on analyzing information evaluations made by other users. <s> BIB008 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What recommendation techniques are used? <s> Travel recommendation systems can tackle the problem of information overload and recommend proper attractions on the basis of users' preferences. Most existing travel recommendation systems utilized travel history, yet neglected the low frequency of tourism and the flexible styles of attractions in different cities, which will cause the inaccuracy in both collaborative filtering recommendation and content-based recommendation. To deal with this issue, we propose a novel personalized travel recommendation framework by leveraging explicit user interaction and multi-modality travel information. As far as we known, it is the first time that attractions are recommended by user interaction and collective intelligence in a unified framework. Specifically, we first collect heterogeneous travel information by multi-user sharing, which is regarded as collective intelligence to provide reliable references by other travelers. Second, valuable knowledge is mined from collective intelligence in order to filter out the noisy data and make travel information structured. Then, personalized attraction similarity (PAS) model is designed to suggest attractions through fusing heterogeneous information with weighted adaptation and simultaneously considering explicit user interaction. 
Finally, context information such as the user's location is well adopted to refine the recommendation that may influence the user's choice at a particular moment. Experimental results on pseudo-relevance data and real-world data demonstrate that our method gains promising performance in terms of effectiveness as well as efficiency. HighlightsThis paper proposes a framework of personalized attraction recommendation in tourism.It mines collective intelligence from heterogeneous travel multimedia on social media.PAS-model is employed to recommend similar attractions with explicit interaction.We can fuse heterogeneous collective intelligence with weight-adaptation.Context information is considered to refine the final results for freshness and surprise. <s> BIB009
|
With respect to the recommendation techniques used in the papers that we have analysed, we distinguish between those using the more traditional techniques, such as content-based (CB), collaborative filtering (CF) and knowledge-based (KB) techniques, and those combining these techniques in hybrid approaches or with context-aware information. The details of each paper are shown in the first column of Table 2 , where we can observe that the traditional techniques represent 48%, 16% and 13% of the works (for CF, CB and KB, respectively), while hybrid approaches and context-aware RSs represent 25% and 16%, respectively. Regarding the content-based technique, BIB002 developed a prototype called "Near2me" integrating multimedia content items, user-generated metadata as context to convey authenticity, and personalisation to the user. Several projects worked with collaborative filtering methods, like , which developed the national tourism web portal of Macedonia, adopting cloud-model CF to reduce the dimensionality of the data and avoid the strict matching of attributes in the similarity computation. BIB003 presented a method named ContextRank, which calculates personalised interests for a specific user from different aspects, namely a visual similarity score, a textual tag similarity score and a collaborative filtering score, exploiting different context information of geotagged web photos to perform personalised tourism recommendation. BIB006 calculated the similarity among users and their networks, combining collaborative filtering techniques, based on user appraisals and trustability evaluations, with social recommendations based on users' activities on SNs. Finally, other relevant models for collaborative filtering include the use of data mining models such as clustering, classification or association pattern mining, as in (Sun et al, 2013) and . The knowledge-based technique was used in three projects. For example, BIB004 proposed an algorithm for the interactive generation of personalised recommendations of POIs based on knowledge mined from Flickr photos and Wikipedia. BIB005 created, on the one hand, a knowledge model used for calculating suggestions and, on the other hand, exploited information about the path followed by the current user during a visit; combining the two allowed the system to produce a list of suggested locations to visit. In addition to these traditional techniques, we highlight hybrid RSs. For instance, BIB009 used content-based, semantic-based and social-based knowledge; BIB007 combined collaborative filtering, content-based recommendation and demographic profiling in their hybrid project. BIB001 introduced a hybrid RS combining a Markov model (a probabilistic model that can handle sequential information) and topic models, also known as hierarchical probabilistic models, in which a user is modelled as a mixture of topics and a topic is modelled as a probability distribution over landmarks. Regarding context-aware systems, we can mention the proposal of an algorithmic approach that applies post-filtering of contextual information to a list of recommendations generated by traditional RS algorithms. Also, BIB008 developed TAIS, a mobile application that combines an attraction information service, a recommendation service, a region context service, a ride-sharing service and a public transport service.
Another interesting work is the SPETA project (García ), which makes use of a variety of techniques, including context-aware, knowledge-based and social-based methods, to retrieve the most suitable services. Finally, we highlight (Van Canneyt et al, 2011) , who explored the possibility of using temporal context factors to better predict which POIs might be interesting to a given user.
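As a concrete illustration of the collaborative filtering family discussed above, the following sketch computes user-based CF scores over an implicit user-by-POI visit matrix using cosine similarity. It represents the generic technique rather than the exact algorithm of any of the cited papers; the toy matrix is invented for illustration.

```python
import numpy as np


def user_based_cf_scores(visits, target_user):
    """Generic user-based collaborative filtering over an implicit-feedback
    user x POI matrix (rows: users, columns: POIs, entries: visit counts)."""
    # Cosine similarity between the target user's row and every other row.
    norms = np.linalg.norm(visits, axis=1) + 1e-12
    sims = (visits @ visits[target_user]) / (norms * norms[target_user])
    sims[target_user] = 0.0                       # exclude the user themselves
    # Score each POI by the similarity-weighted visits of the neighbours.
    scores = sims @ visits / (sims.sum() + 1e-12)
    scores[visits[target_user] > 0] = -np.inf     # do not re-recommend visited POIs
    return scores


# Toy example: 4 users, 5 POIs.
visits = np.array([[3, 0, 1, 0, 0],
                   [2, 0, 2, 1, 0],
                   [0, 4, 0, 0, 3],
                   [0, 3, 0, 1, 2]], dtype=float)
scores = user_based_cf_scores(visits, target_user=0)
print("Recommended POI for user 0:", int(np.argmax(scores)))
```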
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> Photo sharing platforms users often annotate their trip photos with landmark names. These annotations can be aggregated in order to recommend lists of popular visitor attractions similar to those found in classical tourist guides. However, individual tourist preferences can vary significantly so good recommendations should be tailored to individual tastes. Here we pose this visit personalization as a collaborative filtering problem. We mine the record of visited landmarks exposed in online user data to build a user-user similarity matrix. When a user wants to visit a new destination, a list of potentially interesting visitor attractions is produced based on the experience of like-minded users who already visited that destination. We compare our recommender to a baseline which simulates classical tourist guides on a large sample of Flickr users. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. 
We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the properties that they evaluate. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> Recommendation systems provide focused information to users on a set of objects belonging to a specific domain. The proposed recommender system provides personalized suggestions about touristic points of interest. The system generates recommendations, consisting of touristic places, according to the current position of a tourist and previously collected data describing tourist movements in a touristic location/city. The touristic sites correspond to a set of points of interest identified a priori. We propose several metrics to evaluate both the spatial coverage of the dataset and the quality of recommendations produced. We assess our system on two datasets: a real and a synthetic one. Results show that our solution is a viable one. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> The paper proposes a description of information decision support system in the tourism domain and a set of methods and algorithms for generating recommendations for a user that allow significant increase of the system usability. The system generates for the user recommendations which attractions at the moment are better to attend based on the user preferences and the current situation in the location area. The system also allows showing the user information about interesting attraction in more detail, which is based on analyzing information evaluations made by other users. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What properties of recommender systems are used? <s> In recent years, million geo-tagged photos are available in online web service like Flickr, panoramio, etc. People contributing geo-tagged photo and share their travel experiences these media. The photo itself has important information sharing reveals like location, time, tags, title, and weather. We recommend the new method locations travel for tourists according their time and their preference. We get travel user preference according his/her past time in one city and recommendation another city. We examine our technique collect dataset from Flickr publically available and taken different cities of china. Experiment results show that our travel recommendation method according to tourist time capable to predict tourist location recommendation famous places or new places more precise and give better recommendation compare to state of art landmarks recommendation method and personalized travel method. <s> BIB006
|
We surveyed a range of properties that are commonly considered when deciding which recommendation approach to select. As different applications have different needs, it must be decided which properties are important for the specific application at hand. In this survey, we have identified the following properties: accuracy, coverage, confidence, trust, novelty, serendipity, diversity, utility, robustness and scalability, as defined by and shown in the third column of Table 2 . As expected, accuracy, one of the most fundamental measures through which RSs are evaluated, is found in almost all the projects analysed (77% of them). The main components of accuracy evaluation are the design of the evaluation itself and the accuracy metrics (accuracy of estimating ratings and accuracy of estimating rankings). In summary, accuracy tells whether the RS is able to predict the items that the user has already rated or interacted with; thus, RSs that optimise accuracy will naturally place those items at the top of a user's list. The second most sought-after property is confidence, which can stem from available numerical values that describe the frequency of actions, e.g. how long the user watched a certain show or how frequently a user bought a certain item. These numerical values indicate the confidence in each observation: various factors that have nothing to do with user preferences might cause a one-time event, whereas a recurring event is more likely to reflect user opinion BIB001 . A confidence measure is therefore important, as it can help users decide which movies to watch or products to buy, and can also help an e-commerce site decide which recommendations should not be displayed, because an erratic recommendation can diminish the users' trust in the system. In contrast, some projects concentrated on developing a recommender focused on a less "popular" property. BIB006 , for example, is oriented towards improving scalability, which can be understood as the ability of the system to process an increasing amount of work with respect to a desirable performance metric, for example the predictive accuracy of the system . The importance of scalability has become particularly great in recent years because of the increasing importance of the "big-data" paradigm. A variety of measures are used for determining the scalability of a system: training time (most RSs require a training phase, which is separate from the testing phase), prediction time (once a model has been trained, it is used to determine the top recommendations for a particular customer) and memory requirements (when the rating matrices are large, it is sometimes a challenge to hold the entire matrix in main memory) . Similarly, BIB005 centred on guaranteeing robustness; that is, an RS is stable and robust when the recommendations are not significantly affected by attacks such as fake ratings or when the patterns in the data evolve significantly over time. In general, significant profit-driven motivations exist for some users to enter fake ratings; for instance, the author or publisher of a book might enter fake positive ratings about the book at Amazon.com, or fake negative ratings about the books of a rival. In many cases, several properties are pursued.
For instance, (Lemos et al, 2010) tried to improve both accuracy and confidence in order to make satisfactory recommendations of georeferenced photos without prior knowledge of the user profile, considering only the user's current context; they also analysed how the context in which the photos were taken is relevant for making recommendations, and argued that a context model considering various contextual dimensions may lead to better recommendations than one that uses only a single context attribute (e.g., location). Others combined accuracy with coverage BIB004 BIB002 : even when an RS is highly accurate, it may never be able to recommend a certain proportion of the items, or it may never be able to recommend to a certain proportion of the users (this measure is referred to as coverage). Because of this limitation, the trade-off between accuracy and coverage always needs to be incorporated into the evaluation process. There are two types of coverage, referred to as user-space coverage and item-space coverage, respectively. Some of the properties can be traded off against each other; for instance, a decline in accuracy may imply that other properties (e.g. diversity) are improved. Moreover, while we can certainly speculate that users would like diverse recommendations or reported confidence bounds, it is essential to show that such a property matters in practice. In other words, when suggesting a method that improves one of these properties, one should also evaluate how changes in that property affect the user experience, either through a user study or through online experimentation BIB003 . Overall, independently of the property (or properties) pursued in the RSs that we have reviewed, it is clear that the diversity and number of properties considered in the research is increasing, which demonstrates that these features, when well applied, can further improve recommenders.
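To make these evaluation properties more tangible, the sketch below computes two standard accuracy metrics (precision and recall at k) together with item-space coverage. The metric definitions are standard; the toy data are invented purely for illustration.

```python
def precision_recall_at_k(recommended, relevant, k=10):
    """Standard accuracy metrics for top-k recommendation lists.

    `recommended`: ranked list of item ids; `relevant`: set of held-out items
    the user actually interacted with."""
    top_k = recommended[:k]
    hits = len(set(top_k) & relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall


def catalog_coverage(all_recommendations, catalog):
    """Item-space coverage: fraction of the catalogue that appears in at
    least one user's recommendation list."""
    recommended_items = set()
    for rec_list in all_recommendations:
        recommended_items.update(rec_list)
    return len(recommended_items & set(catalog)) / len(catalog)


# Toy example with three users and a ten-item catalogue.
catalog = list(range(10))
recs = [[0, 1, 2], [1, 2, 3], [0, 2, 4]]
truth = [{0, 5}, {3}, {4, 9}]
for u, (r, t) in enumerate(zip(recs, truth)):
    print(f"user {u}: P@3/R@3 =", precision_recall_at_k(r, t, k=3))
print("coverage =", catalog_coverage(recs, catalog))
```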
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> In this paper, we propose a method: Context Rank, which utilizes the vast quantity of geo tagged photos in photo sharing website to recommend travel locations. To enhance the personalized recommendation performance, our method exploits different context information of photos, such as textual tags, geo tags, visual information, and user similarity. Context Rank first detects landmarks from photos' GPS locations, and estimates the popularity of each landmark. Within each landmark, representative photos and tags are extracted. Furthermore, Context Rank calculates the user similarity based on users' travel history. When a user's geo tagged photos are given, the landmark popularity, representative photos and tags, and the user similarity are used to predict the user preference of a landmark from different aspects. Finally a learning to rank algorithm is introduced to combine different preference predictions to give the final recommendation. Experiments performed on a dataset collected from Panoramio show that the Context Rank can obtain a better result than the state-of-the-art method. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> Photo sharing platforms users often annotate their trip photos with landmark names. These annotations can be aggregated in order to recommend lists of popular visitor attractions similar to those found in classical tourist guides. However, individual tourist preferences can vary significantly so good recommendations should be tailored to individual tastes. Here we pose this visit personalization as a collaborative filtering problem. We mine the record of visited landmarks exposed in online user data to build a user-user similarity matrix. When a user wants to visit a new destination, a list of potentially interesting visitor attractions is produced based on the experience of like-minded users who already visited that destination. We compare our recommender to a baseline which simulates classical tourist guides on a large sample of Flickr users. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> Due to the recent wide spread of camera devices with GPS, the number of geotagged photos on the Web is increasing rapidly. Some image retrieval systems and travel recommendation systems which make use of geotagged images on the Web have been proposed so far. While most of them handle a large number of geotagged images as a set of location points, in this paper we handle them as sequences of location points. We propose a travel route recommendation system which utilizes actual travel paths extracted from a large number of photos uploaded by many people on the Web. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> Trip planning is generally a very time-consuming task due to the complex trip requirements and the lack of convenient tools/systems to assist the planning. In this paper, we propose a travel path search system based on geo-tagged photos to facilitate tourists' trip planning, not only for where to visit but also how to visit. 
The large scale geo-tagged photos that are public ally available on the web make this system possible, as geo-tagged photos encode rich travel-related metadata and can be used to mine travel paths from previous tourists. In this work, about 20 million geo-tagged photos were crawled from Panoramio.com. Then a substantial number of travel paths are minded from the crawled geo-tagged photos. After that, a search system is built to index and search the paths, and the Sparse Chamfer Distance is proposed to measure the similarity of two paths. The search system supports various types of queries, including (1) a destination name, (2) a user-specified region on the map, (3) some user-preferred locations. Based on the search system, users can interact with the system by specifying a region or several interest points on the map to find paths. Extensive experiments show the effectiveness of the proposed framework. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> We are developing a recommender system for tourist spots. The challenge is mainly to characterize tourist spots whose features change dynamically with trends, events, season, and time of day. Our method uses a one-class support vector machine (OC-SVM) to detect the regions of substantial activity near target spots on the basis of tweets and photographs that have been explicitly geotagged. A tweet is regarded as explicitly geotagged if the text includes the name of a target spot. A photograph is regarded as explicitly geotagged if the title includes the name of a target spot. To characterize the tourist spots, we focus on geotagged tweets, which are rapidly increasing on the Web. The method takes unknown geotagged tweets originating in activity regions and maps these to target spots. In addition, the method extracts features of the tourist spots on the basis of the mapped tweets. Finally, we demonstrate the effectiveness of our method through qualitative analyses using real datasets on the Kyoto area. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which type of recommendation is generated? <s> We apply collaborative recommendation algorithms to photography in order to produce personalized suggestions for locations in the geocoordinate space where mobile users can take photos. We base our work on a collection of 3 million geotagged, publicly-available Flickr.com digital photos on which we applied a series of steps: first, unique locations are identified by discretizing the continuous latitude and longitude geocoordinates into geographic virtual bins; second, implicit feedback is calculated in a user x location matrix using normalized frequency; and third, missing feedback values are imputed through four different algorithms (one memory-based and three model-based). Our results show that two of the model-based algorithms produced the best RMSE and that the RMSE is sensitive to increasing hash bin size. <s> BIB006
|
In this survey, we have found that RSs mainly generate three types of outputs: places/points of interest (POIs), such as monuments, churches, museums, etc.; tourist routes inside or outside the cities (route); and basic information or instructions for a tour, mountain walks, schedules, etc. (guide). The output of each analysed project can be observed in the fifth column of Table 2 . Firstly, the most common output is POIs, which represent 61% of the projects analysed. BIB001 , for instance, proposed a new method called ContextRank, which exploits different context information of photos to recommend personalised tourism POIs. Their architecture first detects landmarks from geotagged photos and estimates their popularity; then, by analysing the photos and their textual tags, only the representative ones are extracted for each landmark. The method calculates user similarity from users' travel histories, uses all this contextual information to predict a user's preference score for a landmark from different aspects, and combines these scores to give the final recommendation of POIs with the proposed ContextRank algorithm. Another example of POI recommendation is , whose project generates recommendations based on visual matching and minimal user input, by creating clusters of geotagged images and then recommending those POIs matching a query input by the user describing her preferred destinations. A further one is presented by BIB005 , which proposes a method for mapping geotagged tweets to POIs on the basis of the substantial activity regions of the POIs, learned using a one-class support vector machine. We also highlight BIB006 , which applies collaborative recommendation algorithms to geotagged photos in order to produce personalised suggestions for POIs in the geocoordinate space. They used a collection of 3 million Flickr geotagged photos on which a series of steps was applied: first, unique locations were identified by discretising the continuous geocoordinates into geographic virtual bins; second, implicit feedback was calculated in a user/location matrix using normalised frequency; and third, missing feedback values were imputed through four different algorithms. Secondly, we find that 26% of the projects recommend routes, among which we highlight three works. The first is presented by (Sun et al, 2013) , who developed a travel recommendation approach integrating landmarks and routing; the routing is generated with the Dijkstra algorithm, combined with spatial clustering of images. The second is presented by BIB003 , which proposes a travel route RS based on sequences of geotagged photos; the authors explain that the online processing of the system consists of the following steps: selection of the tourist places that a user would like to visit, presentation of travel route candidates, and presentation of the selected travel route on a map. In the third one, unlike the two projects previously mentioned, BIB004 developed a recommendation not of routes but of pedestrian tracks or paths (recall that a path can be, for instance, a "pedestrian path" in open areas without pre-established routes, such as a large garden), in this case for the Forbidden City in China, helping users to plan trips. As an output, their recommender also shows features such as the distribution of the visit duration along the path.
Another feature is the popularity of a destination, measured by the total number of paths through the destination; with this popularity, the system can identify the hottest destinations by season or month, and thus tell users whether March or October is the best travel time, for instance. Finally, we found systems recommending guides in 13% of the projects. An example is the project developed by BIB002 ; according to the authors, classical tourist guides are usually organised around landmark popularity and fail to account for each visitor's preferences. To address this issue, the project introduced techniques such as collaborative filtering to personalise the visit guides, based on one's tagging record and on the discovery of users with similar preferences.
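Several of the POI recommenders above (notably the approach highlighted from BIB006 ) share a common pattern: discretise photo geocoordinates into virtual bins, turn visit counts into normalised implicit feedback, and predict scores for unvisited locations from similar users. The sketch below only illustrates that general pattern with a simple memory-based (user-based) predictor; the bin size, the cosine similarity and the recommend() helper are our own illustrative assumptions, not the cited authors' implementations.

```python
import math
from collections import Counter, defaultdict

def geo_bin(lat, lon, cell_deg=0.01):
    """Discretise a coordinate into a virtual geographic bin (cell size is an assumption)."""
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

def implicit_feedback(photos, cell_deg=0.01):
    """Build a user -> {bin: normalised visit frequency} matrix from (user_id, lat, lon) photo records."""
    counts = defaultdict(Counter)
    for user, lat, lon in photos:
        counts[user][geo_bin(lat, lon, cell_deg)] += 1
    feedback = {}
    for user, bins in counts.items():
        total = sum(bins.values())
        feedback[user] = {b: c / total for b, c in bins.items()}  # normalised frequency
    return feedback

def cosine(u, v):
    """Cosine similarity between two sparse feedback vectors (dicts)."""
    common = set(u) & set(v)
    num = sum(u[b] * v[b] for b in common)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(feedback, target, k=10):
    """Memory-based prediction: score unvisited bins by similarity-weighted feedback of other users."""
    scores, weights = Counter(), Counter()
    for other, vec in feedback.items():
        if other == target:
            continue
        sim = cosine(feedback[target], vec)
        for b, value in vec.items():
            if b not in feedback[target]:
                scores[b] += sim * value
                weights[b] += sim
    ranked = sorted(((s / weights[b], b) for b, s in scores.items() if weights[b] > 0), reverse=True)
    return [b for _, b in ranked[:k]]
```

In practice the cited works replace the naive memory-based step with model-based imputation or learning-to-rank, but the data representation (user by location bin, normalised frequency) stays essentially the same.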
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> We propose a method to predict a user's favourite locations in a city, based on his Flickr geotags in other cities. We define a similarity between the geotag distributions of two users based on a Gaussian kernel convolution. The geotags of the most similar users are then combined to rerank the popular locations in the target city personalised for this user. We show that this method can give personalised travel recommendations for users with a clear preference for a specific type of landmark. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> This paper presents a field study of a framework for personalized mobile recommendations in the tourism domain, of sight-seeing Points of Interest (POI). We evaluate the effectiveness, satisfaction and divergence from popularity of a knowledge-based personalization strategy comparing it to recommending most popular sites. We found that participants visited more of the recommended POIs for lists with popular but non-personalized recommendations. In contrast, the personalized recommendations led participants to visit more POIs overall and visit places "off the beaten track". The level of satisfaction between the two conditions was comparable and high, suggesting that our participants were just as happy with the rarer, "off the beaten track" recommendations and their overall experience. We conclude that personalized recommendations set tourists into a discovery mode with an increased chance for serendipitous findings, in particular for returning tourists. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> This paper presents an overview of user studies in the Music Information Retrieval (MIR) literature. A focus on the user has repeatedly been identified as a key requirement for future MIR research; yet empirical user studies have been relatively sparse in the literature, the overwhelming research attention in MIR remaining systems-focused. We present research topics, methodologies, and design implications covered in the user studies conducted thus far. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> In this paper, we propose a method: Context Rank, which utilizes the vast quantity of geo tagged photos in photo sharing website to recommend travel locations. To enhance the personalized recommendation performance, our method exploits different context information of photos, such as textual tags, geo tags, visual information, and user similarity. Context Rank first detects landmarks from photos' GPS locations, and estimates the popularity of each landmark. Within each landmark, representative photos and tags are extracted. Furthermore, Context Rank calculates the user similarity based on users' travel history. When a user's geo tagged photos are given, the landmark popularity, representative photos and tags, and the user similarity are used to predict the user preference of a landmark from different aspects. Finally a learning to rank algorithm is introduced to combine different preference predictions to give the final recommendation. Experiments performed on a dataset collected from Panoramio show that the Context Rank can obtain a better result than the state-of-the-art method. 
<s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> This paper presents Near2me, a prototype system implementing a travel recommender concept that generates recommendations that are not only personalized, but also authentic. Exploitation of implicit situational knowledge makes it possible for Near2me to recommend places that are not necessarily touristic or famous, but rather are genuinely representative of place and also match users' personal interests. The system allows users to explore, evaluate, and understand recommendations, control recommendation direction and discover informative supporting material. This functionality makes it possible for users to assess recommendations and confirm their suitability and authentic nature. The recommendation system makes use of user photos from the image sharing community Flickr. We take the position that a social media-based environment incorporating multimedia content items, user-contributed annotations and social network connections is uniquely suited to providing users with authentic, personalized recommendations. First results of a user study allow us to conclude that users are interested in exploring locations, topics, and people from different perspectives and confirm authenticity as a relevance criterion. <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> Recommendation systems provide focused information to users on a set of objects belonging to a specific domain. The proposed recommender system provides personalized suggestions about touristic points of interest. The system generates recommendations, consisting of touristic places, according to the current position of a tourist and previously collected data describing tourist movements in a touristic location/city. The touristic sites correspond to a set of points of interest identified a priori. We propose several metrics to evaluate both the spatial coverage of the dataset and the quality of recommendations produced. We assess our system on two datasets: a real and a synthetic one. Results show that our solution is a viable one. <s> BIB006 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> Offline evaluations are the most common evaluation method for research paper recommender systems. However, no thorough discussion on the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that offline evaluations may be inappropriate for evaluating research paper recommender systems, in many settings. <s> BIB007 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> We are developing a recommender system for tourist spots. The challenge is mainly to characterize tourist spots whose features change dynamically with trends, events, season, and time of day. Our method uses a one-class support vector machine (OC-SVM) to detect the regions of substantial activity near target spots on the basis of tweets and photographs that have been explicitly geotagged. A tweet is regarded as explicitly geotagged if the text includes the name of a target spot. 
A photograph is regarded as explicitly geotagged if the title includes the name of a target spot. To characterize the tourist spots, we focus on geotagged tweets, which are rapidly increasing on the Web. The method takes unknown geotagged tweets originating in activity regions and maps these to target spots. In addition, the method extracts features of the tourist spots on the basis of the mapped tweets. Finally, we demonstrate the effectiveness of our method through qualitative analyses using real datasets on the Kyoto area. <s> BIB008 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> In recent years, millions of geo-tagged photos have become available in online web services like Flickr, Panoramio, etc. People contribute geo-tagged photos and share their travel experiences through these media. The photo itself reveals important information such as location, time, tags, title, and weather. We propose a new method to recommend travel locations for tourists according to their time and their preferences. We derive a user's travel preferences from his/her past visits in one city and generate recommendations for another city. We evaluate our technique on a publicly available dataset collected from Flickr, covering different cities of China. Experimental results show that our time-aware travel recommendation method is able to predict tourist locations, whether famous places or new places, more precisely and gives better recommendations compared to state-of-the-art landmark recommendation and personalised travel methods. <s> BIB009 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Which evaluation methods are used? <s> In many real-world reinforcement learning problems, we have access to an existing dataset and would like to use it to evaluate various learning approaches. Typically, one would prefer not to deploy a fixed policy, but rather an algorithm that learns to improve its behavior as it gains more experience. Therefore, we seek to evaluate how a proposed algorithm learns in our environment, meaning we need to evaluate how an algorithm would have gathered experience if it were run online. In this work, we develop three new evaluation approaches which guarantee that, given some history, algorithms are fed samples from the distribution that they would have encountered if they were run online. Additionally, we are the first to propose an approach that is provably unbiased given finite data, eliminating bias due to the length of the evaluation. Finally, we compare the sample-efficiency of these approaches on multiple datasets, including one from a real-world deployment of an educational game. <s> BIB010
|
In this section we classify the analysed projects according to two possible evaluation methods, online or offline, as presented by BIB007 Beel and Langer, 2015; BIB010 . On the one hand, in online evaluations, recommendations are shown to real users of the system during their session; that is, the system is evaluated with the active and direct participation of the users, and the investigator obtains real feedback from them. On the other hand, offline evaluations use precompiled datasets from which some information has been removed; in other words, the system is evaluated without the active and direct participation of users, using either data collected from users (real data) or generated data (synthetic data). Subsequently, the recommender algorithms are analysed on their ability to recommend the missing information BIB007 . According to BIB003 , although the number of studies involving users has increased, conducting such studies in real-world settings remains time-consuming and expensive, particularly for academic researchers. Consequently, relatively few studies measuring aspects related to user satisfaction have been published . On the one hand, of all the papers analysed in this research (column 3 in Table 2 ), 84% have evaluated their systems using the offline method, such as BIB009 . They used a sample of a Flickr dataset with 1,376,886 photographs with their spatial and temporal context, and cleaned these data by removing two types of photos from the dataset: photos that were collected as the result of a text search containing the name of a city in their metadata, and photos with an incorrect temporal context. Then, they applied a density-based clustering algorithm to the geo-tags associated with the photos. In this way, they compared several methods, such as popularity rank, collaborative filtering rank, classic rank and recommending popular places, to show the effectiveness of context ranking, which is their proposal. With this evaluation method, Memon demonstrated that the project is able to predict tourists' preferences in a new city more precisely and generate better recommendations compared with other recommendation methods. Using not one but three different datasets (tourist spots from Foursquare, geotagged tweets from Twitter, and geotagged photographs from Panoramio), BIB008 conducted qualitative analyses in order to evaluate the effectiveness of the proposed methods (mapping geotagged tweets to tourist spots and extracting features of the tourist spots). Another example was proposed by BIB004 : a method named ContextRank that used a dataset from Panoramio containing approximately 15 million geotagged photos. For each landmark, the authors chose 10 representative photos by clustering. In their offline evaluation, they compared their method to the scale-space representation of all the geotags proposed in BIB001 . Their results showed that different kinds of context information can help to enhance recommendation performance when a user lacks travel history. Unlike previous works, BIB006 , to build a knowledge model, chose to measure the effectiveness and efficiency of the proposed solution using two trajectory sets: synthetic and real data. The offline real dataset was made up of data from Flickr, where the trajectories were built from users' photos.
In turn, the offline synthetic dataset was generated using a trajectory generator for a specific geographic area. It takes as input a dataset of POIs, which are combined into sequences that form trajectories. In this way, this project was able to perform two evaluations: (1) the quality of the trajectory set, adopting spatial coverage, data coverage, region separation and rate; and (2) the effectiveness and efficiency, adopting the prediction rate, accuracy, average error and omega. The results showed that this project is able to generate suggestions of potential POIs, depending on the current position of a tourist and on a set of trajectories describing the paths previously made by other tourists. On the other hand, only 16% of the projects have submitted their systems to an assessment by real users. Even those involved a very small number of users. Using 21 participants from 8 different countries, BIB002 developed an online evaluation to see the effect of personalisation on the behaviour of participants. Besides using a questionnaire to capture the participants' past travel habits, the participants also completed parallel data collection tasks. Then, lists of recommended POIs were generated. These lists were either personalised or based on popularity, but both consisted of precisely five POIs given the limited time available for sightseeing. The participants received a list of recommended points of interest and indicated how much they liked each POI on a scale from 1 to 7 (1=not at all, 7=a lot). Although some participants did not follow many of the recommendations in the personalised lists, the author found that personalised recommendations enabled a "discovery mode", that is, participants visited more POIs than in the popular condition, and these POIs were also rarer than the POIs visited by participants in the popular condition. Thus, this project showed that personalised recommendations may increase serendipity, since users are more likely to discover sites that surpass their a priori assessment. In the project called MMedia2U developed by (Lemos et al, 2010) , a group of 13 users evaluated photos from 8 different contexts, each one constituting a stage of the evaluation. Lemos pointed out that an online evaluation of an RS is a hard task, because an item's relevance has a strongly personal nature and is complex to measure. This difficulty increases when historical evaluation data are lacking, which makes large-scale studies very costly and difficult to run. In his case, the complexity is even greater, since his project needs to cover the possible contexts of real situations. In each stage of the evaluation, one context (approximately 100 photos, of which 20 were taken in contexts similar to the one shown to the user and 80 differed in some dimensions of the context) was presented to the user. The volunteers had to view a set of photos and choose those that seemed most appealing to them, taking into consideration the suggested context. The degree of success of the recommendations was then evaluated by the ratio of chosen photos. In general, the results of this project showed that, for the data used, context-awareness can bring gains in photo recommendation compared to a random list. In the case of BIB005 , 12 volunteers participated in the user-oriented evaluation of the prototypical implementation of Near2me.
This project focused on discovering: how Near2me is perceived by users in general and how users interact with the system; how the individual components contribute to the users' satisfaction with the system; and finally, how the interplay of the components conveys authenticity and personalisation to the user. The evaluation consisted of a task-directed walkthrough of the interface carried out on the working prototype. During the evaluation, the subjects were asked to use the Near2me prototype to plan a possible trip to Paris and were left free to interact with the prototype for a maximum of 30 minutes. While performing the task, the participants were asked to speak aloud, giving insights about the motivations behind each action, their expectations about the foreseen outputs, and their satisfaction with the actual recommendation and interaction paradigm. The subjects were observed, the most relevant comments and behaviours were noted, and each session was recorded using both a video camera and screencast software. After the walkthrough, information was obtained from the participants through semi-structured interviews. A question framework based on the research questions guided the interviews. This framework was adapted for each participant according to her vocabulary, and the notes taken during observation allowed the researchers to explore and confirm each participant's feedback. This evaluation showed that the participants are interested in three perspectives: locations, topics, and experts.
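Across the offline studies above, the underlying protocol is similar: part of the users' recorded behaviour is hidden, and the recommender is judged by how much of it it can reconstruct. The sketch below illustrates such a hold-out evaluation in its simplest form; the split ratio, the precision/recall-at-k metrics and the recommender signature are our own illustrative assumptions rather than the exact procedure of any cited paper.

```python
import random

def offline_holdout_eval(visits, recommender, k=5, hide_ratio=0.3, seed=42):
    """Generic offline protocol: hide part of each user's visited POIs, recommend from the rest,
    and check how many hidden POIs the recommender recovers (precision/recall at k)."""
    rng = random.Random(seed)
    precisions, recalls = [], []
    for user, pois in visits.items():
        pois = list(pois)
        if len(pois) < 2:
            continue  # need at least one observed and one hidden POI
        rng.shuffle(pois)
        n_hidden = max(1, int(len(pois) * hide_ratio))
        hidden, observed = set(pois[:n_hidden]), pois[n_hidden:]
        recommended = recommender(user, observed, k)  # any of the surveyed algorithms fits here
        hits = len(set(recommended) & hidden)
        precisions.append(hits / k)
        recalls.append(hits / len(hidden))
    n = len(precisions)
    return {"precision@k": sum(precisions) / n, "recall@k": sum(recalls) / n} if n else {}
```

Online evaluations, by contrast, replace the hidden ground truth with judgments collected from real users, as in the studies with 21, 13 and 12 participants described above.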
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What type of interface is used? <s> Trip planning is generally a very time-consuming task due to the complex trip requirements and the lack of convenient tools/systems to assist the planning. In this paper, we propose a travel path search system based on geo-tagged photos to facilitate tourists' trip planning, not only for where to visit but also how to visit. The large scale geo-tagged photos that are publicly available on the web make this system possible, as geo-tagged photos encode rich travel-related metadata and can be used to mine travel paths from previous tourists. In this work, about 20 million geo-tagged photos were crawled from Panoramio.com. Then a substantial number of travel paths are mined from the crawled geo-tagged photos. After that, a search system is built to index and search the paths, and the Sparse Chamfer Distance is proposed to measure the similarity of two paths. The search system supports various types of queries, including (1) a destination name, (2) a user-specified region on the map, (3) some user-preferred locations. Based on the search system, users can interact with the system by specifying a region or several interest points on the map to find paths. Extensive experiments show the effectiveness of the proposed framework. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What type of interface is used? <s> The paper proposes a description of information decision support system in the tourism domain and a set of methods and algorithms for generating recommendations for a user that allow significant increase of the system usability. The system generates for the user recommendations which attractions at the moment are better to attend based on the user preferences and the current situation in the location area. The system also allows showing the user information about interesting attraction in more detail, which is based on analyzing information evaluations made by other users. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> What type of interface is used? <s> Travel recommendation systems can tackle the problem of information overload and recommend proper attractions on the basis of users' preferences. Most existing travel recommendation systems utilized travel history, yet neglected the low frequency of tourism and the flexible styles of attractions in different cities, which will cause the inaccuracy in both collaborative filtering recommendation and content-based recommendation. To deal with this issue, we propose a novel personalized travel recommendation framework by leveraging explicit user interaction and multi-modality travel information. As far as we known, it is the first time that attractions are recommended by user interaction and collective intelligence in a unified framework. Specifically, we first collect heterogeneous travel information by multi-user sharing, which is regarded as collective intelligence to provide reliable references by other travelers. Second, valuable knowledge is mined from collective intelligence in order to filter out the noisy data and make travel information structured. Then, personalized attraction similarity (PAS) model is designed to suggest attractions through fusing heterogeneous information with weighted adaptation and simultaneously considering explicit user interaction.
Finally, context information such as the user's location is well adopted to refine the recommendation that may influence the user's choice at a particular moment. Experimental results on pseudo-relevance data and real-world data demonstrate that our method gains promising performance in terms of effectiveness as well as efficiency. Highlights: This paper proposes a framework of personalized attraction recommendation in tourism. It mines collective intelligence from heterogeneous travel multimedia on social media. PAS-model is employed to recommend similar attractions with explicit interaction. We can fuse heterogeneous collective intelligence with weight-adaptation. Context information is considered to refine the final results for freshness and surprise. <s> BIB003
|
In our survey, we have found projects that use a mobile-phone-based interface, a web-based interface, or no interface at all. Specifically, of the 18 papers that provided an interface, 12 (67%) were web-oriented and the remaining 6 (33%) were mobile-oriented. These are detailed in column 7 (Table 2) . We did not find any desktop-oriented application. An example of an RS with an interface for Android mobile phones is the app TAIS (Tourist Assistant) developed by BIB002 . The main application screen is shown in Figure 4 (left). The tourist can see images extracted from accessible internet sources, a clickable map with his/her location, the current weather, and the surrounding attractions ranked by the recommendation service. When the tourist clicks on an attraction, a context menu shows detailed information about the chosen attraction (Figure 4, right). We also show some details of the web-oriented project presented by BIB001 , which, unlike the other projects, recommends not only where to visit but also how to visit; that is, it recommends a "path" along with high-quality photos taken at the destination. In Figure 5 , we see an example of the results obtained after a user inputs a destination name and obtains the recommended paths within the query destination, in this case "the Forbidden City". A web-oriented project was introduced in BIB003 . Figure 6 shows a visual example of the personalised travel recommendation. The system can collect the current location and show the corresponding city on the map; high-quality photos taken at that destination are also shown to users. Also, the user can input their favourite and non-favourite attractions on the right side of the interface. If the user does not wish to interact with the system, it will show results ranked by popularity, to avoid the cold-start problem.
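The path search interface of BIB001 ranks candidate travel paths by how similar they are to the user's query, and the underlying work names a Sparse Chamfer Distance for this purpose. The survey does not detail that measure, so the sketch below only illustrates a plain Chamfer-style distance between two geo-paths (the symmetric average of nearest-point distances); the haversine helper and the point-set representation of a path are our own assumptions.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def chamfer_distance(path_a, path_b):
    """Symmetric Chamfer-style distance between two paths given as lists of (lat, lon) points:
    average distance from each point of one path to its nearest point on the other, in both directions."""
    def directed(src, dst):
        return sum(min(haversine_km(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (directed(path_a, path_b) + directed(path_b, path_a))

# Usage: rank mined paths by similarity to a query made of user-preferred locations.
# ranked = sorted(candidate_paths, key=lambda path: chamfer_distance(query_points, path))
```

Whatever the exact variant, a distance of this kind is what lets the interface answer region and interest-point queries with the most similar previously mined paths.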
|
Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker's personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using self-reports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers. <s> BIB001 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> This study aimed at relating tourists’ Internet behaviours and the Big Five Factors (BFF) of personality to identify personality items that better predict tourists’ Internet behaviours. Survey data from 288 domestic tourists to Busan, South Korea, was used to empirically examine the relationship between the BFF and Internet behaviours. Results indicate that Internet travel information sources vary with the BFF with the exception of extraversion, and the Internet channels used for travel information search also varied with the BFF with the exception of conscientiousness. The Internet is more widely used as a source of travel information but less for travel purchases. The results also suggest that the responses to some BFF items can substantially improve the predictability of tourists’ Internet behaviours. Implications for the use of the BFF in designing travel information systems are addressed. <s> BIB002 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> The existing approaches for enhancing diversity in online recommendations neglect the user's spontaneous needs that might be potentially influenced by her/his personality. In this paper, we report our ongoing research on exploring the actual impact of personality values on users' needs for recommendation diversity. 
The results from a preliminary user survey are reported, that show the significantly causal relationship from personality factors (such as conscientiousness) to the users' diversity preference (not only over the item's individual attributes but also on all attributes when they are combined). We further present our plan for the follow-up work and discuss its practical implications. <s> BIB003 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> In recent years there has been an exponential increase in the number of users each day adopting e-commerce as a purchasing vehicle of products and services. This has led to a growing interest from the scientific community in approaches and models that would improve the customer experience. Specifically, it has been repeatedly pointed out that the definition of a customer experience tailored to the user personality traits would likely increase the probability of purchase. In this article we illustrate a recommender system for e-commerce capable of adapting the product and service offer according to not only the user interests and preferences, and his context of use, but also his personality profile derived from information relating to his professional activities. <s> BIB004 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> Adaptive applications may benefit from having models of users' personality to adapt their behavior accordingly. There is a wide variety of domains in which this can be useful, i.e., assistive technologies, e-learning, e-commerce, health care or recommender systems, among others. The most commonly used procedure to obtain the user personality consists of asking the user to fill in questionnaires. However, on one hand, it would be desirable to obtain the user personality as unobtrusively as possible, yet without compromising the reliability of the model built. On the other hand, our hypothesis is that users with similar personality are expected to show common behavioral patterns when interacting through virtual social networks, and that these patterns can be mined in order to predict the tendency of a user personality. With the goal of inferring personality from the analysis of user interactions within social networks, we have developed TP2010, a Facebook application. It has been used to collect information about the personality traits of more than 20,000 users, along with their interactions within Facebook. Based on all the collected data, automatic classifiers were trained by using different machine-learning techniques, with the purpose of looking for interaction patterns that provide information about the users' personality traits. These classifiers are able to predict user personality starting from parameters related to user interactions, such as the number of friends or the number of wall posts. The results show that the classifiers have a high level of accuracy, making the proposed approach a reliable method for predicting the user personality <s> BIB005 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> Recommender Systems (RSs) are software tools and techniques that provide suggestions for items that are most likely of interest to a particular user. In this introductory chapter, we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook.
Additionally, we aim to help the reader navigate the rich and detailed content that this handbook offers. <s> BIB006 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> The research work of the third author is partially funded by the WIQ-EI (IRSES grant n. 269180) and DIANA APPLICATIONS (TIN2012-38603-C02-01), and done in the framework of the VLC/Campus Microcluster on Multimodal Interaction in Intelligent Systems. <s> BIB007 </s> Recommendation Systems for Tourism Based on Social Networks: A Survey <s> Discussion <s> This paper proposes the development of an Agent framework for tourism recommender system. The recommender system can be featured as an online web application which is capable of generating a personalized list of preference attractions for tourists. Traditional technologies of classical recommender system application domains, such as collaborative filtering, content-based filtering and constraint-based filtering are effectively adopted in the framework. In the framework they are constructed as Agents that can generate recommendations respectively. Recommender Agent can generate recommender information by integrating the recommendations of Content-based Agent, collaborative filtering-based Agent and constraint-based Agent. In order to make the performance more effective, linear combination method of data fusion is applied. User interface is provided by the tourist Agent in form of webpages and mobile app. <s> BIB008
|
As we showed in this article, the combination of RSs and SNs is obtaining better results and, indirectly, enhancing the tourism sector's economy BIB008 . This is crucial, since the application of RSs in such a customer-sensitive sector has become a necessity, not a luxury; moreover, RSs have great value because they assist all parts of the tourism value chain. On one side, they support better and faster decisions when customers are choosing a destination, and help them plan holidays according to their needs, improving the overall service offered. On the other side, they also offer considerable benefits for service providers such as hotels, restaurants or cultural event organisers, improving their online presence, increasing sales, and reducing costs for advertising activities . Thus, the number of projects that use SNs in their RSs keeps growing, as does the volume of data generated in those environments, as shown in Fig. 2 , thus progressively influencing tourists around the world. RSs are deeply changing the way tourists search, find, read and trust information when choosing a destination. Meanwhile, through SNs people create and share content related to everything, from travel agencies to relevant information about a certain POI. However, the growth of academic research production can be hampered by some relevant challenges, the main one being access to data from SNs. In 2018, Facebook, for instance, announced dramatic data access restrictions on its app and website in response to the public outcry following the Cambridge Analytica scandal . This decision made it virtually impossible to carry out large-scale research on Facebook. The changes rendered obsolete software and libraries dedicated to academic research on Facebook, including Netvizz, NodeXL, SocialMediaLab, fb scrape public and Rfacebook, all of which relied on Facebook's APIs to collect data. Twitter, in contrast, operates three well-documented public APIs, in addition to its premium and enterprise offerings. Twitter's relative accessibility leads it to be vastly overrepresented in social media research. But public and open APIs are an exception in the social media ecosystem. Facebook's Public Feed API, for example, is restricted to a limited set of media publishers. Due to the increasing data restrictions on the part of large companies such as Twitter , Instagram , and Facebook , some campaigns and initiatives in favour of data sharing have been gaining support in the scientific community. The idea of one of those projects, known as "Open Data" 6 , is that data be available to everyone, without restrictions, and can be freely used, reused and redistributed by anyone, subject to the requirement of mentioning the original source and sharing under the same licences under which the information was collected. In other words, the goal of the open data movement is similar to others such as open source, open content and open access. We believe that data sharing, whether from SNs, public or private bodies, is extremely relevant for researchers in all areas of knowledge. In the case of public bodies, the Ministers of Science of all nations belonging to the Organisation for Economic Co-operation and Development (OECD 7 ) signed a statement in 2004 saying that, basically, all publicly funded archival data must be accessible to the public.
With respect to the data available online, such as in SNs, future research will have to deal with an increasingly sensitive and troubling issue, the privacy and use of the data, not least because SNs store very intimate data. Recently, we can observe two simultaneous scenarios: SNs that provide APIs for data access and analysis, and SNs that suppress them, such as Facebook, as we have already mentioned. According to (Bastos and Walker, 2018) , the aforementioned data restriction, which causes a differentiation between public and "premium" versions, will widen the gap between industry researchers hired by SNs and researchers working outside of corporations. In spite of such restrictions, there are large databases available for research purposes, which could be used in projects that seek an offline evaluation to measure their accuracy, for instance. Some examples of those databases are Open Data, Stanford Large Network Dataset Collection, UCI Network Data Repository, Interesting Social Media Datasets, Network data, and Kevin Chai's. Throughout this paper, we have tried to present and clarify some theoretical and technical topics in the development of a recommender, by analysing recommendation system projects since mid-2004. We presented a summary of the basic recommendation techniques, an overview of what SNs are about, their benefits, and their importance to recommendation projects. Then, we ordered (by date of publication) the main works of the last 10 years about RSs in the tourism sector that make use of SNs and classified them into categories such as: SNs and online databases used, items extracted from these sites, evaluation techniques applied, general evaluation goals, display and interface. Overall, we observed that RSs are diversifying their data sources, consequently adding more complexity to their ability to interpret and predict customer interests. There are still many studies that use a single data source (e.g. Flickr), retrieve data considered basic (e.g. age, gender, marital status, number of children, etc.), and seek only accuracy improvements by means of basic techniques such as CB and CF to generate POI recommendations. However, recent investigations have started to use more complex data (e.g. correlations between network contacts in a SN, behaviour, texts, photos, etc.) from multiple data sources (e.g. Facebook + Wikipedia + TripAdvisor) and different properties (e.g. novelty, serendipity, diversity), increasing the variety of assessments, mainly thanks to machine learning. We also consider that the use of SNs (also known as social-based RSs) can indirectly solve or at least mitigate some well-known issues of recommenders, such as the problem of (1) the new user/item, known as the cold start problem; (2) sparsity or ratio diffusion; (3) compilation of demographic information; (4) portfolio effect; (5) recommendations with excessive results; (6) serendipity BIB006 Tavakolifard, 2012) , as well as to improve the quality of recommendations in the tourism context . The cold start problem (1) appears with new users/items, i.e., a system is not capable of recommending an item with acceptable accuracy until the user has rated enough items. By using SNs, this problem can be mitigated, since it is possible to retrieve "likes", comments, and reviews made by the user in one or more SN.
Similarly, there is the new element problem, in which a new item is not recommended until a considerable number of users have rated it, so the probability of the system recommending such an item is low. To get around this problem, first, POI ratings could be retrieved from different SNs such as Facebook, Flickr, TripAdvisor or Google Maps. Secondly, POIs without enough reviews or comments can be of interest to people who like exotic, isolated or lesser-known places; thus, if the system is able to detect those profiles, it can recommend them those places. The sparsity or ratio diffusion problem (2) occurs when there are few or no user ratings that match each other, so there would be few users to compare with or few similar elements to look for. This problem is commonly found in CB and CF RSs. In this context, SNs play a crucial role due to the large number of user profiles available, which could minimise or even neutralise this problem. The compilation of demographic information (3) refers to the lack of information related to where people reside or are currently located. Sometimes, a user can be reluctant to provide information to a new system, whether due to privacy concerns or to a lack of trust in the service. The use of data already shared on SNs, which allows this kind of information to be retrieved in a non-intrusive way, could solve this problem. The portfolio effect (4) concerns the recommendation of an item very similar to another item that the user already has in her history. In the case of tourism, an RS that already knows the places the user has visited, through information posted on their SN, could then avoid recommending places of similar categories and locations. Recommendations with poor or excessive results (5) can overwhelm the user. In order to reduce or specialise the items recommended, additional properties could be applied to the RSs, such as novelty, diversity, serendipity, utility, etc. A good number of those properties could be based on the personality predicted using data available on SNs, following existing psychological theories. For instance, the system could recommend a reduced number of useful POIs by considering curiosity: the higher the degree of curiosity, the lower the popularity of the recommended POI, and vice versa BIB009 . One of the keys to serendipity (6) may be the prediction of an individual's personality. By using data from SNs, such prediction could be easily achieved, and thus the RSs would be able to positively surprise the user by recommending items that really match the user's interests. Regarding the adoption of real users to assess the recommenders developed, it is worth stressing their importance when measuring the quality of a system. It is highly abstract to build a system that generates positively surprising (serendipitous) recommendations without the cooperation of a human being, since each person has unique tastes, and the same item may be relevant for one individual but not for another. In short, the researcher needs to understand the user's response to the parameter under study, which is not feasible in an offline environment.
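Before turning to the evaluation question, one of the property-based ideas above, the curiosity-aware selection described for CURUMIM BIB009 , is concrete enough to sketch. The rule below simply caps the popularity of candidate POIs according to the user's predicted curiosity score; the thresholds and the field names are purely our own assumptions for illustration, not the cited system's implementation.

```python
def curiosity_aware_selection(candidate_pois, curiosity, k=5):
    """Illustrative rule: the higher the predicted curiosity (0..1), the less popular the POIs we keep.
    Each candidate is a dict with hypothetical 'name', 'popularity' (0..1) and 'relevance' fields."""
    popularity_cap = 1.0 - 0.8 * curiosity  # very curious users get mostly off-the-beaten-track POIs
    eligible = [p for p in candidate_pois if p["popularity"] <= popularity_cap]
    if not eligible:  # fall back to the least popular candidates if the cap filters everything out
        eligible = sorted(candidate_pois, key=lambda p: p["popularity"])[:k]
    return sorted(eligible, key=lambda p: p["relevance"], reverse=True)[:k]
```

Whether such a rule actually feels serendipitous to a given user is exactly the kind of question that only evaluation with real users can answer.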
In spite of the advantages and facilities that offline tests offer to the researcher, we believe that a recommendation system should be submitted to field experimentation, where the data are recorded from reactions resulting from the variables the researcher introduces in the experiment; as previously stated, the variables are not controlled, because RSs are developed for human beings, whose tastes, situations and profiles are different. To strengthen this point of view, we must also consider the relationship between tourism and psychology BIB002 , as well as recommendation projects that use psychology to improve their recommendations BIB003 BIB007 BIB005 . We believe that RSs cannot lose sight of their target, which is the human being and the context in which he or she is situated. That is, the individual plays an extremely important role in this process. Nevertheless, projects that count on the participation of volunteers to assess their systems, thus seeking an online evaluation, can resort to the Open Source Social Network (OSSN), a rapid-development social networking software, although volunteers would then have to be recruited to feed those OSSNs, which is laborious. Another option for projects that need user interaction is Diaspora 8 , an SN launched in 2010 that already has 600 thousand users, where the user "owns" his data and has the power to share it as he wants; therefore, upon request and acceptance, these data could be used. Although further studies are needed to assess the benefits of the online evaluation, it is vital to encourage forthcoming projects to ask for feedback from the users, who are the main beneficiaries of the recommenders. In this way, it would be possible to widely explore the influence and impact of SNs on all aspects of RSs in the tourism sector. We expect that clarifying which SNs were used in the recommendation projects may encourage the use of SNs as a way of nourishing RSs in new projects, since nowadays their use is simple and accessible to any researcher. In general terms, we hope to have provided an overview of recommendation systems and SNs covering the existing definitions in the literature, their types and characteristics; we also hope that the state-of-the-art knowledge generated here can support researchers and practitioners in their understanding of developments in RS applications. With regard to the challenges for future investigations, it is important to emphasise that we did not find works on RSs for the tourism sector that use human personality to enrich the user profile so that different aspects can be taken into account BIB004 BIB001 . Also, we consider that the generation of recommendations in the tourism sector based on SNs that somehow take human personality into account will gain importance. In this sense, the first steps have already been taken in other areas of knowledge and in industry, and tourism will be no different.
|
Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> I. INTRODUCTION <s> Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. <s> BIB001 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> I. INTRODUCTION <s> Cloud computing is an emerging technology. It process huge amount of data so scheduling mechanism works as a vital role in the cloud computing. Thus my protocol is designed to minimize the switching time, improve the resource utilization and also improve the server performance and throughput. This method or protocol is based on scheduling the jobs in the cloud and to solve the drawbacks in the existing protocols. Here we assign the priority to the job which gives better performance to the computer and try my best to minimize the waiting time and switching time. Best effort has been made to manage the scheduling of jobs for solving drawbacks of existing protocols and also improvise the efficiency and throughput of the server. <s> BIB002 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> I. INTRODUCTION <s> To program in distributed computing environments such as grids and clouds, workflow is adopted as an attractive paradigm for its powerful ability in expressing a wide range of applications, including scientific computing, multi-tier Web, and big data processing applications. With the development of cloud technology and extensive deployment of cloud platform, the problem of workflow scheduling in cloud becomes an important research topic. 
The challenges of the problem lie in: NP-hard nature of task-resource mapping; diverse QoS requirements; on-demand resource provisioning; performance fluctuation and failure handling; hybrid resource scheduling; data storage and transmission optimization. Consequently, a number of studies, focusing on different aspects, emerged in the literature. In this paper, we firstly conduct taxonomy and comparative review on workflow scheduling algorithms. Then, we make a comprehensive survey of workflow scheduling in cloud environment in a problem-solution manner. Based on the analysis, we also highlight some research directions for future investigation. <s> BIB003 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> I. INTRODUCTION <s> Sensor device is emerging as a promising enabler for the development of new solutions in a plethora of Internet of Things (IoT) applications. With the explosion of connected devices, it is essential for conversion gateway between the Internet and sensor nodes to support end-to-end (e2e) interoperability because the current Internet Protocol (IP) does not support end-to-end delay in IEEE 802.15.4e. As part of IoT, we propose a scheduling scheme of multiple channels and multiple timeslots to minimize the e2e delay in multi-hop environments. The proposed greedy heuristic approach is compared with the meta-heuristics in terms of the given end-to-end delay bound. Although the meta-heuristics is more accurate in finding a global optimum or sub-optimal values than the greedy heuristic approach, this advantage comes at the expense of high complexity. The simulation results show that the proposed scheme reduces the complexity by obtaining suboptimal solutions that satisfy the e2e delay requirement. <s> BIB004
|
Scheduling in cloud computing is a process or mechanism applied 'to minimise wasting limited resources by efficiently allocating them among all active nodes' BIB004 . Nodes or virtual machines (VMs) are the virtual resources that are assigned to consumers for running the service and executing tasks BIB001 . Scheduling is a very complex operation in cloud computing, used to allocate resources, improve server utilisation, enhance service performance, and execute tasks BIB002 . Scheduling can use either static or dynamic methods for scheduling resources in cloud computing. These methods can provide sufficient use of cloud resources to meet Quality of Service (QoS) requirements BIB003 . Furthermore, using scheduling techniques can avoid conflicts in allocating active resources. For example, scheduling can avoid duplicate allocation of the same virtual resource at one time. Also, it can help manage limited resources and handle high demand by using a dynamic method that updates the system regularly and executes tasks over resources according to resource availability. However, there are some issues that need to be considered, such as security, limited resources, virtual machines and applications. Executing and running tasks over the allocated resources raises some security issues that need to be considered, such as data security and service security. Data security includes privacy, integrity, and protection from threats and attacks. Service security includes resource security and privacy. So, there is a need to consider these issues and the security constraints, including data security and availability, to achieve optimised resource scheduling. For this research, the main focus will be on resource scheduling mechanisms when security is factored into the cloud model. According to Singh and Chana , there are two main objectives for resource scheduling, as follows: 1) Workloads refer to the tasks that consumers want to run over the resources. So, identifying suitable resources for scheduling workloads on time will help to enhance the effectiveness of resource utilisation. 2) To identify heterogeneous multiple workloads to fulfil the Quality of Service (QoS) requirements, such as CPU utilisation, availability, reliability and security. This paper focuses on searching and reviewing prior research relevant to resource scheduling and security in cloud computing and on identifying possible existing gaps. This paper is organised as follows. Section II gives an overview of cloud computing including cloud definition, cloud architecture, obstacles facing cloud growth, research method, explaining why SLR is important to this research, and research questions and research scope. Section III discusses the search strategy, including identifying the search period, search strings, and search engines. Section IV describes the inclusion/exclusion criteria and the procedures of selection. Section V explains the aim of SLR. Section VI presents how the data will be extracted from each paper. Section VII discusses the synthesis strategy and the threats to validity. Section VIII explains limitations and factors that could affect this research. Section IX presents a discussion of current findings and the proposed solution. Section X concludes with suggestions and future work.
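As a minimal illustration of the dynamic, availability-based scheduling idea described above, the sketch below greedily assigns prioritised tasks to the VM with the most remaining capacity; the Task/VM structure, the priority ordering and the capacity model are assumptions made for illustration, not a mechanism taken from the cited works.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    demand: int      # resource units required (assumed abstraction of CPU/memory needs)
    priority: int    # higher value = more urgent (assumption)

@dataclass
class VM:
    name: str
    capacity: int                      # total resource units available on the node
    assigned: list = field(default_factory=list)

    def free(self):
        return self.capacity - sum(t.demand for t in self.assigned)

def schedule(tasks, vms):
    """Greedy dynamic scheduling: take tasks in priority order and place each on the VM
    with the most free capacity, deferring tasks that no VM can currently host."""
    pending = []
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        best = max(vms, key=lambda vm: vm.free())
        if best.free() >= task.demand:
            best.assigned.append(task)   # avoids double-allocating the same capacity
        else:
            pending.append(task)         # re-queued until resources become available
    return pending

# Example: two VMs and three tasks; the highest-priority task is placed first.
vms = [VM("vm-1", capacity=8), VM("vm-2", capacity=4)]
tasks = [Task("backup", 3, priority=1), Task("web", 4, priority=3), Task("batch", 6, priority=2)]
leftover = schedule(tasks, vms)
```

A real scheduler would also re-evaluate pending tasks as VMs free up and would factor in the QoS and security constraints discussed above.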
|
Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT Infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements(SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring on Quality of Service (QoS) attributes is necessary to enforce SLAs. Also numerous other factors such as trust (on the cloud provider) come into consideration, particularly for enterprise customers that may outsource its critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement(WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real world use case to validate our proposal. <s> BIB001 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. <s> BIB002 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. 
Cloud Definition <s> Cloud computing has elevated IT to newer limits by offering the market environment data storage and capacity with flexible scalable computing processing power to match elastic demand and supply, whilst reducing capital expenditure. However the opportunity cost of the successful implementation of Cloud computing is to effectively manage the security in the cloud applications. Security consciousness and concerns arise as soon as one begins to run applications beyond the designated firewall and move closer towards the public domain. The purpose of the paper is to provide an overall security perspective of Cloud computing with the aim to highlight the security concerns that should be properly addressed and managed to realize the full potential of Cloud computing. Gartner's list on cloud security issues, as well the findings from the International Data Corporation enterprise panel survey based on cloud threats, will be discussed in this paper. <s> BIB003 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology's (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. <s> BIB004 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> Cloud computing, with its promise of (almost) unlimited computation, storage, and bandwidth, is increasingly becoming the infrastructure of choice for many organizations. As cloud offerings mature, service-based applications need to dynamically recompose themselves to self-adapt to changing QoS requirements. In this paper, we present a decentralized mechanism for such self-adaptation, using market-based heuristics. We use a continuous double-auction to allow applications to decide which services to choose, among the many on offer. We view an application as a multi-agent system and the cloud as a marketplace where many such applications self-adapt. 
We show through a simulation study that our mechanism is effective for the individual application as well as from the collective perspective of all applications adapting at the same time. <s> BIB005 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> The main issues in a cloud based environment are security, process fail rate and performance. Fault tolerance plays a key role in ensuring high serviceability and reliability in cloud. Nowadays, demands for high fault tolerance, high serviceability and high reliability are becoming unprecedentedly strong, building a high fault tolerance, high serviceability and high reliability cloud is a critical, challenging, and urgently required task. A lot of research is currently underway to analyze how clouds can provide fault tolerance for an application. When numbers of processes are too many and any virtual machine is overloaded then the processes are failed causing lot of rework and annoyance for the users. The major cause of the failure of the processes at the virtual machine level are overloading of virtual machines, extra resource requirements of the existing processes etc. This paper introduces dynamic load balancing techniques for cloud environment in which RAM/Broker (resource awareness module) proactively decides whether the process can be applied on an existing virtual machine or it should be assigned to a different virtual machine created a fresh or any other existing virtual machine. So, in this way it can tackle the occurrence of fault. This paper also proposed a mechanism which proactively decides the load on virtual machines and according to the requirement either creates a new virtual machine or uses an existing virtual machine for the assigning the process. Once a process completes it will update the virtual machine status on the broker service so that other processes can be assigned to it. <s> BIB006 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> A. Cloud Definition <s> The cloud computing is a resource and offers computer assets with services instead of a deliverable product which allows storage and shar- ing of the files of multiple types like audio, video, software's, data files and many more. The data is shared over internet cloud storage and can be accessed for free and also at an affordable price. The effective way of sharing the information and technology by collaboration the real world to availed the competitive advantages. This paper makes a brief description about the cloud computing and its scaling techniques the main explanation is on vertical scaling and horizontal scaling with examples. <s> BIB007
|
There are many different definitions of cloud computing. The National Institute of Standards and Technology (NIST) gives a basic definition of cloud computing as "a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction". To obtain a cloud service, a consumer needs to contact a service provider, and through this communication the consumer and the provider reach an agreement on the level of service. This agreement is referred to as the Service Level Agreement (SLA) BIB001 . The SLA is the basis for the expected level of service between the consumer and the provider. The provider of a cloud architecture can offer various services to a consumer. Quality of Service (QoS) refers to cloud stakeholders' expectation of obtaining a desirable service that meets requirements such as timeliness, scalability, high availability, trust and security, as specified in the SLA . For this research, Quality of Service (QoS) includes the following concerns: • Security: Security is a responsibility shared between cloud providers and consumers to ensure that security remains at the desired level. Consumers need to be aware of security on their side and protect their service from threats. Cloud providers achieve better scalability by running multiple virtual machines on physical machines, and they have to defend the service against security risks such as unauthorised physical access and threats to data security, security software, and resource security. Providers that do not use virtual machines still have to secure servers and data storage against security risks. In addition, any weakness in the virtualisation technology that allows co-resident virtual machines to gain unauthorised access could compromise the information assets of consumers BIB005 . • Service Performance: A consumer who requires a certain level of service performance will need provider guarantees in order to run the required service in the cloud. The Service Level Agreement (SLA) is an agreement between a service provider and a consumer that specifies the level of the service provided [9] . Both provider and consumer follow the rules and conditions of this agreement to keep the service secure. Services can vary both in terms of functionality (such as storage capacity or processor count) and in terms of the Quality of Service (QoS) provided . In terms of QoS, a provider will offer a defined SLA which the consumer can use when determining the 'best' provider for their needs. According to Mell and Grance , the cloud architecture is a combination of the following three components: • Essential Characteristics: The essential characteristics are a set of cloud features that allow providers and consumers to manage, access, and measure cloud services and resources. These characteristics give cloud providers and consumers different levels of control to measure and provision the service. From a security perspective, each characteristic raises different security concerns for both providers and consumers, including access control and data security BIB002 . Access control covers accessing and managing the service, and access availability. Data security covers data confidentiality, data integrity, and data availability.
The five essential cloud characteristics are: 1) On-demand self-service: A consumer can manage and control service resources such as server time and network storage without any physical interaction with the provider. 2) Broad network access: Consumers can access and use the cloud service from anywhere across the network. 3) Resource pooling: Providers serve consumers with different resources, such as storage, processing, physical machines, and network bandwidth, according to consumer demand BIB002 . Consumers do not need to be concerned about the physical location of resources. 4) Rapid elasticity: Resources can be rapidly scaled outward or inward at any time according to consumer demand. 5) Measured service: The usage of resources can be tracked and controlled, including by consumers. • Service Models: Service models define for a consumer the type of system management and operation and the type of access to cloud systems. According to Nallur et al. BIB005 , service availability, security and performance are the main elements that affect a cloud service under the service models. Based on the SLA, consumers have to trust the provider on service availability; if there is any downtime, the main concern is the recovery time before the service can be obtained again, and the recovery process is the responsibility of the service provider as set out in the SLA. Both provider and consumers are involved in security, such as data security and protection, and the provider is responsible for delivering a secure and reliable service over the network. Service performance means that the service is provided to consumers at a satisfactory level and of good quality. There are three types of service model, each providing different capabilities for obtaining the service: 1) Infrastructure as a Service (IaaS): Provides a basic form of the service, such as a virtual machine (VM), virtual storage and network bandwidth BIB005 . Consumers have to configure the settings and install any needed operating system and software before running the service. One of the main security concerns for the provider in IaaS is to check that there is no interference between VMs while the service is running. 2) Software as a Service (SaaS): Software and applications are provided by the cloud provider for consumers to use. Consumers can access the service from different devices via different interfaces, such as a web browser or a program interface. One security concern that needs to be considered is web browser security: weak browser security can let an attacker obtain important information or hijack the consumer's resources and data. 3) Platform as a Service (PaaS): The cloud provider supplies a platform on which the consumer can develop their applications, while the provider remains responsible for maintenance and all upgrades of the platform. Table I shows the main security issues that exist for each service model BIB004 . From Table I , SaaS has the most security issues because it is more complex than the other service models; PaaS and IaaS have fewer security issues because providers retain better control over security and are not involved at the application level. Table I also shows, from a responsibility perspective, how the security issues fall to providers and consumers.
These issues differ in terms of the responsibilities of providers and consumers. The table shows that most of the responsibility for ensuring the security level of the service lies with the providers. Provider responsibilities include data (security, locality, segregation, confidentiality), network security, authentication and authorisation, vulnerability in virtualisation, availability, and identity management. Using secure web applications to access the service is mostly the responsibility of the consumers. Other security issues, such as data access, data breaches and backup, are responsibilities shared by providers and consumers. These security issues affect each service model differently. The issues are: • Data Security: Providers need to use sound techniques, such as encryption and decryption, to secure data access. • Network Security: To secure the data flowing through the network against any breach or leakage. • Data Locality: To store consumers' data in a reliable location and protect it from risk. • Data Integrity: To ensure that data is stored, and flows through the databases of the service, correctly and accurately. • Data Segregation: To protect data flows and data storage from intrusions at each level of the service. • Data Access: To control data access for consumers. • Authorisation, Authentication: To manage access to the service or database. • Data Confidentiality: To control and protect the data flow at each level of the service. • Web Application Security: Consumers need to ensure that the web applications they use to access the service are secure. • Data Breaches: Providers need to protect data and prevent any indirect access. • Vulnerability in Virtualisation: Providers need to ensure that tasks are executed in isolation from one another to reduce the security risks that could occur. • Availability: Providers need to ensure that the service is delivered on demand. • Backup: Backup data is important; if it is compromised, unauthorised access will create security issues for consumers. Providers need to ensure that backups are taken regularly, and are secured and encrypted, so that the service is more reliable and can be recovered quickly when required. • Identity Management: To control and verify the identity of those accessing the service and its resources, based on the information used to log in. Subashini and Kavitha BIB004 argue that security issues such as data security and network security impose a significant trade-off on each service model in obtaining a reliable, trusted and secure service. The service models offer different features to customers and providers for operating the service. SaaS offers many significant benefits to customers, such as improved service efficiency and reduced costs. In SaaS the provider does all the provisioning of hardware, data storage, power and virtual resources, so consumers pay only for what they use, with no other upfront costs. Despite these benefits, SaaS has issues such as lack of visibility of where data is stored, and security. In PaaS users can build their applications on top of the platform, but this raises security risks for all the services: building applications on top of the platform increases risks such as data compromise and network intrusion by opening the way for intruders to attempt unauthorised actions BIB004 .
For example, hackers can attack application code and run very large numbers of malicious programs against the service. In IaaS, consumers can obtain services at lower cost but with only basic security configuration and less load balancing, so providers have to ensure that the service infrastructure is highly secure with respect to data storage, data security, data transmission, and network security. • Deployment Models: Deployment models describe how cloud services are delivered to consumers. There are many security concerns around the cloud deployment models, including data privacy and trust, policies, and data transfer. Because of these concerns providers have to secure cloud services and apply security policies that govern data access and security. The four deployment models, which specify who may use a cloud service , are: 1) Public: Cloud services are accessible to all users without restriction. 2) Private: Cloud services are available only to a particular single group. 3) Community: Cloud services are shared between a limited group of organisations with similar concerns. 4) Hybrid: A hybrid cloud combines multiple clouds, for example joining services so that some parts are private and other parts are public or community BIB006 . The common security issues that need to be addressed across these deployment models are authentication, authorisation, availability, access control and data security. These issues matter because each deployment model offers a different security level. For example, a public cloud is less secure than the other models, so it is more likely to be attacked by malicious hackers seeking information that can then be used to break into the private level. Providers are responsible for service security and have to stop unauthorised access and malicious attacks on the service; suspicious behaviour includes malicious attacks and abuse of the service. Consumers take responsibility for information and data security, such as integrity, confidentiality, authorisation and authentication. A list of the top ten obstacles facing cloud computing from BIB002 is summarised in Table II . Armbrust et al. BIB002 indicate that the weight given to each obstacle will vary from one stakeholder (consumer or provider) to another. The first obstacle is Service Availability, which has several sides. Cloud providers can offer multiple sites to improve availability, while consumers may choose to use multiple providers for the same reason. Even so, parts of a service may become unavailable to some consumers at any time. Service unavailability can have many causes, such as crashed applications, high load on the service, and service hijacking BIB003 , and in each case consumers will perceive the service as down and unusable. However, services spread over multiple clouds or multiple sites also give an attacker more opportunities to create a security threat; an attacker can use a public service to gain unauthorised access to resources or carry out malicious activities that affect the service. One way to defend against this issue is to combine a quick scale-up method with security monitoring BIB002 . Scaling in the cloud is used to control cloud resources and comes in two forms, horizontal and vertical BIB007 .
Vertical scaling (scale up) increases the virtual resources of an existing machine in order to restore or improve performance. Horizontal scaling (scale out) adds further instances of the service across additional resources; service availability can be addressed with this method if any single virtual resource becomes unavailable, whereas providing the service from a single physical resource or a single site remains an availability risk. The second, third and fourth obstacles concern data boundaries between platforms: Data Storage, Data Confidentiality, and Data Transfer. The security implications to be considered include losing data, data leakage, transferring data, and data security. The fifth, sixth, seventh, and eighth obstacles are more technical, relating to performance, Scalable Storage, removing errors in large-scale distributed systems, and how services can be established with quick scaling while keeping an overview of service costs. Quick scaling could itself cause unavailability of the service under very high task load, which needs to be considered as a security implication of this method. The ninth and tenth obstacles concern service policies, the Service Level Agreement (SLA) and Software Licensing. The concern here is the eligibility or authorisation to use the software, and ensuring there is no misuse of the licence BIB002 .
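As an illustration of the horizontal and vertical scaling decisions discussed above, the sketch below applies simple utilisation thresholds to decide whether to add an instance (scale out), remove one (scale in), or grow an existing instance (scale up). The thresholds, instance limits, and metric names are assumptions made purely for illustration, not values taken from the cited works.

```python
from dataclasses import dataclass

@dataclass
class ClusterState:
    instances: int          # running service instances (horizontal dimension)
    cpu_per_instance: int   # vCPUs per instance (vertical dimension)
    avg_cpu_util: float     # average CPU utilisation across instances, 0.0-1.0

def scaling_decision(state: ClusterState,
                     high: float = 0.80,
                     low: float = 0.30,
                     max_instances: int = 10,
                     max_cpu: int = 16) -> str:
    """Return a scaling action for the next control interval."""
    if state.avg_cpu_util > high:
        # Prefer scaling out: more instances also improves availability,
        # since the service no longer depends on a single resource.
        if state.instances < max_instances:
            return "scale_out:add_instance"
        if state.cpu_per_instance < max_cpu:
            return "scale_up:add_cpu"
        return "at_capacity:consider_load_shedding"
    if state.avg_cpu_util < low and state.instances > 1:
        # Scale in to avoid paying for idle resources.
        return "scale_in:remove_instance"
    return "no_change"

if __name__ == "__main__":
    print(scaling_decision(ClusterState(instances=2, cpu_per_instance=4, avg_cpu_util=0.91)))
    print(scaling_decision(ClusterState(instances=4, cpu_per_instance=4, avg_cpu_util=0.12)))
```

The security implication noted above still applies: a controller like this must be rate-limited and monitored, since sudden scaling under very high load can itself make the service unavailable.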
|
Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> VI. DATA EXTRACTION <s> Cloud computing has elevated IT to newer limits by offering the market environment data storage and capacity with flexible scalable computing processing power to match elastic demand and supply, whilst reducing capital expenditure. However the opportunity cost of the successful implementation of Cloud computing is to effectively manage the security in the cloud applications. Security consciousness and concerns arise as soon as one begins to run applications beyond the designated firewall and move closer towards the public domain. The purpose of the paper is to provide an overall security perspective of Cloud computing with the aim to highlight the security concerns that should be properly addressed and managed to realize the full potential of Cloud computing. Gartner's list on cloud security issues, as well the findings from the International Data Corporation enterprise panel survey based on cloud threats, will be discussed in this paper. <s> BIB001 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> VI. DATA EXTRACTION <s> The cloud computing is a resource and offers computer assets with services instead of a deliverable product which allows storage and shar- ing of the files of multiple types like audio, video, software's, data files and many more. The data is shared over internet cloud storage and can be accessed for free and also at an affordable price. The effective way of sharing the information and technology by collaboration the real world to availed the competitive advantages. This paper makes a brief description about the cloud computing and its scaling techniques the main explanation is on vertical scaling and horizontal scaling with examples. <s> BIB002
|
For this work, the following questions, shown in Table III, will be used for data extraction from each paper: 1) What was the research question(s)? 2) What sort of models are used to test the ideas in the paper(s)? 3) Which security issues (Table I) are addressed? What cloud implementation(s) have been used? 7) What was the outcome measure(s)? 8) What was the reported outcome of the research? 9) Original work or replication? 10) Appropriate data analysis? 11) Was there a clear link between data and conclusion? 12) What were the research hypotheses? 13) Was the context discussed? 14) Was any comparison discussed? 15) Were the research questions answered? 16) What is the approach of the paper, including the basic design?
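One lightweight way to apply the Table III questions consistently across primary studies is to capture the answers for each paper in a fixed record, as sketched below in Python. The field names simply mirror the questions, the example entry is a hypothetical study, and the structure is an illustrative aid for this review rather than a prescribed SLR tool.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    """Answers to the Table III data-extraction questions for one primary study."""
    paper_id: str
    research_questions: List[str] = field(default_factory=list)   # Q1
    models_used: List[str] = field(default_factory=list)          # Q2
    security_issues: List[str] = field(default_factory=list)      # Q3, drawn from Table I
    cloud_implementations: List[str] = field(default_factory=list)
    outcome_measures: List[str] = field(default_factory=list)
    reported_outcome: str = ""
    original_work: bool = True          # original study or replication
    data_supports_conclusion: bool = False
    notes: str = ""

# Hypothetical example of a completed record for one reviewed paper.
records = [
    ExtractionRecord(
        paper_id="S01",
        research_questions=["How does security overhead affect scheduling latency?"],
        models_used=["simulation"],
        security_issues=["Data Security", "Availability"],
        cloud_implementations=["CloudSim"],
        outcome_measures=["makespan", "SLA violations"],
        reported_outcome="Security-aware scheduling increased makespan slightly.",
        data_supports_conclusion=True,
    )
]

# Simple synthesis step: count how often each Table I security issue is addressed.
issue_counts = Counter(issue for r in records for issue in r.security_issues)
print(issue_counts)
```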
|
Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Utility functions provide a natural and advantageous framework for achieving self-optimization in distributed autonomic computing systems. We present a distributed architecture, implemented in a realistic prototype data center, that demonstrates how utility functions can enable a collection of autonomic elements to continually optimize the use of computational resources in a dynamic, heterogeneous environment. Broadly, the architecture is a two-level structure of independent autonomic elements that supports flexibility, modularity, and self-management. Individual autonomic elements manage application resource usage to optimize local service-level utility functions, and a global arbiter allocates resources among application environments based on resource-level utility functions obtained from the managers of the applications. We present empirical data that demonstrate the effectiveness of our utility function scheme in handling realistic, fluctuating Web-based transactional workloads running on a Linux cluster. <s> BIB001 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> We introduce HAIL (High-Availability and Integrity Layer), a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable. HAIL strengthens, formally unifies, and streamlines distinct approaches from the cryptographic and distributed-systems communities. Proofs in HAIL are efficiently computable by servers and highly compact---typically tens or hundreds of bytes, irrespective of file size. HAIL cryptographically verifies and reactively reallocates file shares. It is robust against an active, mobile adversary, i.e., one that may progressively corrupt the full set of servers. We propose a strong, formal adversarial model for HAIL, and rigorous analysis and parameter choices. We show how HAIL improves on the security and efficiency of existing tools, like Proofs of Retrievability (PORs) deployed on individual servers. We also report on a prototype implementation. <s> BIB002 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Cloud computing as newly emergent computing environment offers dynamic flexible infrastructures and QoS guaranteed services in pay-as-you-go manner to the public. System virtualization technology which renders flexible and scalable system services is the base of the cloud computing. How to provide a self-managing and autonomic infrastructure for cloud computing through virtualization becomes an important challenge. In this paper, using feedback control theory, we present VM-based architecture for adaptive management of virtualized resources in cloud computing and model an adaptive controller that dynamically adjusts multiple virtualized resources utilization to achieve application Service Level Objective (SLO) in cloud computing. Compared with Xen, KVM is chosen as a virtual machine monitor (VMM) to implement the architecture. Evaluation of the proposed controller model showed that the model could allocate resources reasonably in response to the dynamically changing resource requirements of different applications which execute on different VMs in the virtual resource pool to achieve applications SLOs. 
<s> BIB003 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> A system consisting of a number of servers, where demands of different types arrive in bursts (modelled by interrupted Poisson processes), is examined in the steady state. The problem is to decide how many servers to allocate to each job type, so as to minimize a cost function expressed in terms of average queue sizes. First, an exact analysis is provided for an isolated IPP/M/n queue. The results are used to compute the optimal static server allocation policy. The latter is then compared to four heuristic policies which employ dynamic switching of servers from one queue to another (such switches take time and hence incur costs). <s> BIB004 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Cloud-based services are an attractive deployment model for user-facing applications like word processing and calendaring. Unlike desktop applications, cloud services allow multiple users to edit shared state concurrently and in real-time, while being scalable, highly available, and globally accessible. Unfortunately, these benefits come at the cost of fully trusting cloud providers with potentially sensitive and important data. ::: ::: To overcome this strict tradeoff, we present SPORC, a generic framework for building a wide variety of collaborative applications with untrusted servers. In SPORC, a server observes only encrypted data and cannot deviate from correct execution without being detected. SPORC allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. We demonstrate SPORC's flexibility through two prototype applications: a causally-consistent key-value store and a browser-based collaborative text editor. ::: ::: Conceptually, SPORC illustrates the complementary benefits of operational transformation (OT) and fork* consistency. The former allows SPORC clients to execute concurrent operations without locking and to resolve any resulting conflicts automatically. The latter prevents a misbehaving server from equivocating about the order of operations unless it is willing to fork clients into disjoint sets. Notably, unlike previous systems, SPORC can automatically recover from such malicious forks by leveraging OT's conflict resolution mechanism. <s> BIB005 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> This paper presents Venus, a service for securing user interaction with untrusted cloud storage. Specifically, Venus guarantees integrity and consistency for applications accessing a key-based object store service, without requiring trusted components or changes to the storage provider. Venus completes all operations optimistically, guaranteeing data integrity. It then verifies operation consistency and notifies the application. Whenever either integrity or consistency is violated, Venus alerts the application. We implemented Venus and evaluated it with Amazon S3 commodity storage service. The evaluation shows that it adds no noticeable overhead to storage operations. <s> BIB006 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Cloud computing has become a popular choice as an alternative to investing new IT systems. 
When making decisions on adopting cloud computing related solutions, security has always been a major concern. This article summarizes security concerns in cloud computing and proposes five service deployment models to ease these concerns. The proposed models provide different security related features to address different requirements and scenarios and can serve as reference models for deployment. <s> BIB007 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In- House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. Using the Jericho Forum’s ‘Cloud Cube Model’ (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model, and then based on this discuss each business model’s strengths and weaknesses. We hope adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn. <s> BIB008 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> In computing clouds, it is desirable to avoid wasting resources as a result of under-utilization and to avoid lengthy response times as a result of over-utilization. In this paper, we propose a new approach for dynamic autonomous resource management in computing clouds. The main contribution of this work is two-fold. First, we adopt a distributed architecture where resource management is decomposed into independent tasks, each of which is performed by Autonomous Node Agents that are tightly coupled with the physical machines in a data center. Second, the Autonomous Node Agents carry out configurations in parallel through Multiple Criteria Decision Analysis using the PROMETHEE method. Simulation results show that the proposed approach is promising in terms of scalability, feasibility and flexibility. <s> BIB009 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> The paper describes the design, implementation, and evaluation of Depot, a cloud storage system that minimizes trust assumptions. Depot tolerates buggy or malicious behavior by any number of clients or servers, yet it provides safety and liveness guarantees to correct clients. Depot provides these guarantees using a two-layer architecture. First, Depot ensures that the updates observed by correct nodes are consistently ordered under Fork-Join-Causal consistency (FJC). FJC is a slight weakening of causal consistency that can be both safe and live despite faulty nodes. Second, Depot implements protocols that use this consistent ordering of updates to provide other desirable consistency, staleness, durability, and recovery properties. Our evaluation suggests that the costs of these guarantees are modest and that Depot can tolerate faults and maintain good availability, latency, overhead, and staleness even when significant faults occur. 
<s> BIB010 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> In the commercial world, various computing needs are provided as a service. Service providers meet these computing needs in different ways, for example, by maintaining software or purchasing expensive hardware. Security is one of the most critical aspects in a cloud computing environment due to the sensitivity and importance of information stored in the cloud. The risk of malicious insiders in the cloud and the failure of cloud services have received a great deal of attention by companies. This paper focuses on issues related to data security and privacy in cloud computing and proposes a new model, called Multi-Cloud Databases (MCDB). The purpose of the proposed new model is to address security and privacy risks in the cloud computing environment. Three security issues will be examined in our proposed model: data integrity, data intrusion, and service availability. <s> BIB011 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> We present BlueSky, a network file system backed by cloud storage. BlueSky stores data persistently in a cloud storage provider such as Amazon S3 or Windows Azure, allowing users to take advantage of the reliability and large storage capacity of cloud providers and avoid the need for dedicated server hardware. Clients access the storage through a proxy running on-site, which caches data to provide lower-latency responses and additional opportunities for optimization. We describe some of the optimizations which are necessary to achieve good performance and low cost, including a log-structured design and a secure in-cloud log cleaner. BlueSky supports multiple protocols--both NFS and CIFS--and is portable to different providers. <s> BIB012 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Data security is one of the biggest concerns in adopting Cloud computing. In Cloud environment, users remotely store their data and relieve themselves from the hassle of local storage and maintenance. However, in this process, they lose control over their data. Existing approaches do not take all the facets into consideration viz. dynamic nature of Cloud, computation & communication overhead etc. In this paper, we propose a Data Storage Security Model to achieve storage correctness incorporating Cloud’s dynamic nature while maintaining low computation and communication cost. <s> BIB013 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> In this study, we design, develop, and simulate a cloud resources pricing model that satisfies two important constraints: the dynamic ability of the model to provide a high satisfaction guarantee measured as Quality of Service (QoS) - from users perspectives, profitability constraints - from the cloud service providers perspectives We employ financial option theory and treat the cloud resources as underlying assets to capture the realistic value of the cloud compute commodities (C3). We then price the cloud resources using our model. 
We discuss the results for four different metrics that we introduce to guarantee the quality of service and price as follows: (a) Moore's law based depreciation of asset values, (b) new technology based volatility measures in capturing price changes, (c) a new financial option pricing based model combining the above two concepts, and (d) the effect of age of resources and depreciation of cloud resource on QoS. We show that the cloud parameters can be mapped to financial economic model and we discuss the results of cloud compute commodity pricing for various parameters, such as the age of the resource, quality of service, and contract period. <s> BIB014 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> The increasing popularity of cloud storage services has lead companies that handle critical data to think about using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud still remain major concerns. In this work we present DepSky, a system that improves the availability, integrity, and confidentiality of information stored in the cloud through the encryption, encoding, and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability, and in most cases, the access latency, when compared with cloud providers individually. Moreover, the monetary costs of using DepSky in this scenario is at most twice the cost of using a single cloud, which is optimal and seems to be a reasonable cost, given the benefits. <s> BIB015 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Cloud computing offers computational resources such as processing, networking, and storage to customers. However, the cloud also brings with it security concerns which affect both cloud consumers and providers. The Cloud Security Alliance (CSA) define the security concerns as the seven main threats. This paper investigates how threat number one (malicious activities performed in consumers' virtual machines/VMs) can affect the security of both consumers and providers. It proposes logging solutions to mitigate risks associated with this threat. We systematically design and implement a prototype of the proposed logging solutions in an IaaS to record the history of customer VM's files. The proposed system can be modified in order to record VMs' process behaviour log files. These log files can assist in identifying malicious activities (spamming) performed in the VMs as an example of how the proposed solutions benefits the provider side. The proposed system can record the log files while having a smaller trusted computing base compared to previous work. Thus, the logging solutions in this paper can assist in mitigating risks associated with the CSA threats to benefit consumers and providers. <s> BIB016 </s> Systematic Literature Review (SLR) of Resource Scheduling and Security in Cloud Computing <s> IX. DISCUSSION <s> Cloud computing is an emerging technology. It process huge amount of data so scheduling mechanism works as a vital role in the cloud computing. 
Thus my protocol is designed to minimize the switching time, improve the resource utilization and also improve the server performance and throughput. This method or protocol is based on scheduling the jobs in the cloud and to solve the drawbacks in the existing protocols. Here we assign the priority to the job which gives better performance to the computer and try my best to minimize the waiting time and switching time. Best effort has been made to manage the scheduling of jobs for solving drawbacks of existing protocols and also improvise the efficiency and throughput of the server. <s> BIB017
|
This section discusses the findings of the SLR and then presents the proposed outcome. It covers recent related approaches in the area of cloud security, such as data storage approaches related to Data as a Service (DaaS) in which data storage moves from a single cloud to a multi-cloud, as well as security models. It also covers approaches to resource management that use static and dynamic methods with a focus on performance. A review of recent cloud models has been performed to obtain an overview of the model categories shown in Table IV . Models have been classified into categories according to the main focus of the approach, including Data as a Service (DaaS), Infrastructure as a Service (IaaS) and cloud storage. The DaaS models focus on data security in general, unlike the cloud storage models, which are concerned with data centre security; the IaaS models focus on infrastructure security. Table I shows that some issues, such as Authentication, Accountability, Intrusion, and Reliability, have received less attention than others, while the most studied areas are Integrity, Availability, and Security. Most approaches relate to cloud storage and DaaS, which means that IaaS needs more work, especially on security. The DepSky system BIB015 addresses the availability and confidentiality of data in its storage system by using multiple cloud providers and combining Byzantine quorum system protocols, cryptographic secret sharing and erasure codes. NetDB2-MS BIB011 , by contrast, is a model to ensure the privacy level in DaaS, based on distributing data to different service providers and employing Shamir's secret sharing algorithm . The BlueSky system BIB012 extends the DepSky system to be more reliable, to deal with large storage volumes from a cloud provider, and to avoid a dedicated hardware server. The SafeStore system is focused more on availability than on performance and cost, which distinguishes it from the other systems. Other approaches, such as HAIL BIB002 , ICStore , SPORC BIB005 , Depot BIB010 , and the data storage models of BIB013 , BIB006 , have focused on cloud storage including data security aspects such as data integrity and data confidentiality; they also share similar limitations, such as data intrusion and availability. There are models at the deployment level that deal with security risks but have limitations in confidentiality and integrity, such as the Separation Model, Migration Model, Availability Model, Tunnel Model, and Cryptography Model BIB007 . Data privacy remains a major concern in other models, such as the Jericho Forum's Cloud Cube Model BIB008 , the Hexagon model BIB008 , the Multi-Tenancy Cloud Model , the Cloud Risk Accumulation Model , and the Mapping Model BIB008 . The logging approach of BIB016 shows that log files can mitigate risks, benefiting both sides in terms of accountability, security, performance, and scalability. Other work in scheduling, such as BIB017 , takes task priorities into consideration and then assigns tasks to be executed over the allocated resources. If there is more than one task for a resource, the tasks are scheduled with different methods depending on what is better for that resource, and all tasks are then run in parallel. This work assigns dependent tasks to run first and non-dependent tasks afterwards, to minimise deadlock situations. Table V shows some approaches related to resource management that use static and dynamic methods and focus on performance. The approaches of Li et al. BIB003 and Yazir et al.
BIB009 relate to resource scheduling using static and dynamic mechanisms, but they do not include any security factors for avoiding security risks. Among static mechanisms, the approach introduced by Jiayin et al. offers a static scheduling solution to improve service performance over virtual machines: tasks are executed on particular cloud resources according to a static resource allocation, with the aim of regulating the utilisation of multiple resources to meet the applications' service level objectives (SLOs). An algorithm has also been proposed that adjusts resource allocation based on updates from the actual task executions, which helps to recalculate the finishing times assigned in the cloud. Walsh et al. BIB001 proposed a utility-function solution that divides the architecture into two layers (local and global). The local layer is responsible for calculating resource allocation dynamically, whereas the global layer computes a near-optimal configuration for allocating resources based on the results provided by the local layer, and also handles load balancing within the server cluster, which helps application scalability. Other approaches that use dynamic mechanisms, such as Yazir et al. BIB009 and Slegers et al. BIB004 , include a comparison of a static policy with four heuristic dynamic policies; they show the differences between the policies and present the benefits and weaknesses of each type for using and managing cloud resources. A pricing model was introduced by Sharma et al. BIB014 for dynamic resource management and low-cost cloud services, but it does not include the security factor, and the authors identify saving cost on physical resources and maintenance as limitations of their model. As a result of this SLR, the model proposed by Sheikh et al. considers the overall security discussed by Watson in developing a Scheduling Security Model (SSM) to address the issues, such as security and cost, found in the other approaches identified by this SLR.
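To illustrate the two-layer utility-function idea attributed to Walsh et al. BIB001 above, the sketch below lets each application environment expose a local utility over candidate resource shares, and a global arbiter then chooses the split of a fixed number of servers that maximises the summed utility. The utility shapes, application names, and brute-force search are simplifying assumptions for illustration, not the authors' actual implementation.

```python
from itertools import product

# Local layer: each application environment exposes a service-level utility
# as a function of the number of servers it is given (illustrative shapes).
def web_utility(servers: int) -> float:
    return 10 * (1 - 0.5 ** servers)        # strong need for the first few servers

def batch_utility(servers: int) -> float:
    return 2.0 * servers                     # utility grows linearly with capacity

applications = {"web": web_utility, "batch": batch_utility}

def global_arbiter(total_servers: int):
    """Global layer: choose the allocation maximising the summed local utilities."""
    best_alloc, best_value = None, float("-inf")
    names = list(applications)
    # Brute force over all ways to split the servers (fine for a toy example).
    for split in product(range(total_servers + 1), repeat=len(names)):
        if sum(split) != total_servers:
            continue
        value = sum(applications[n](s) for n, s in zip(names, split))
        if value > best_value:
            best_alloc, best_value = dict(zip(names, split)), value
    return best_alloc, best_value

if __name__ == "__main__":
    allocation, value = global_arbiter(total_servers=6)
    print(allocation, round(value, 2))
```

A security-aware variant, in the spirit of the SSM discussed above, could add a penalty term to the utility for allocations that violate stated security constraints, although how to weight such a penalty is an open design question.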
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Ratonalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution. Resistance. and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets 01 Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoners Dilemma 7.2 A General Model of Repeated Garnet 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Wines in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. 
Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapkey Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Colational Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Oilers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> A multi-person discrete game where the payoff after each play is stochastic is considered. The distribution of the random payoff is unknown to the players and further none of the players know the strategies or the actual moves of other players. A learning algorithm for the game based on a decentralized team of learning automata is presented. It is proved that all stable stationary points of the algorithm are Nash equilibria for the game. Two special cases of the game are also discussed, namely, game with common payoff and the relaxation labelling problem. The former has applications such as pattern recognition and the latter is a problem widely studied in computer vision. For the two special cases it is shown that the algorithm always converges to a desirable solution. > <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. 
INTRODUCTION <s> In order to cater for the overwhelming growth in bandwidth demand from mobile Internet users operators have started to deploy different, overlapping radio access network technologies. One important challenge in such a heterogeneous wireless environment is to enable network selection mechanisms in order to keep the mobile users Always Best Connected (ABC) anywhere and anytime. Game theory techniques have been receiving growing attention in recent years as they can be adopted in order to model and understand competitive and cooperative scenarios between rational decision makers. This paper presents an overview of the network selection decision problem and challenges, a comprehensive classification of related game theoretic approaches and a discussion on the application of game theory to the network selection problem faced by the next generation of 4G wireless networks. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB005 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> The handling of the number of objects that will be part of the Internet of Things (IoT) and the various networking technologies used for their interconnection requires suitable architecture and technological foundations. Despite significant work on architectures and test facilities for the IoT, there is still a lack of management functionality to overcome the technological heterogeneity and complexity of the underlying networks and IoT infrastructure, so as to enhance context/situational-awareness, reliability, and energy-efficiency of IoT applications. This article presents a cognitive management framework for the IoT aiming to address these issues, comprising three levels of functionality: virtual objects (VOs), composite VOs (CVOs), and service levels. Cognitive entities at all levels provide the means for self-management (configuration, optimization, and healing) and learning. 
Three fundamental processes of this framework are presented: dynamic CVO creation, knowledge-based CVO instantiation, and CVO self-healing. A first prototype implementation of this framework and corresponding derived results are presented. <s> BIB006 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities. <s> BIB007 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Current research on Internet of Things (IoT) mainly focuses on how to enable general objects to see, hear, and smell the physical world for themselves, and make them connected to share the observations. In this paper, we argue that only connected is not enough, beyond that, general objects should have the capability to learn, think, and understand both physical and social worlds by themselves. This practical need impels us to develop a new paradigm, named cognitive Internet of Things (CIoT), to empower the current IoT with a “brain” for high-level intelligence. Specifically, we first present a comprehensive definition for CIoT, primarily inspired by the effectiveness of human cognition. Then, we propose an operational framework of CIoT, which mainly characterizes the interactions among five fundamental cognitive tasks: perception-action cycle, massive data analytics, semantic derivation and knowledge discovery, intelligent decision-making, and on-demand service provisioning. Furthermore, we provide a systematic tutorial on key enabling techniques involved in the cognitive tasks. In addition, we also discuss the design of proper performance metrics on evaluating the enabling techniques. Last but not the least, we present the research challenges and open issues ahead. Building on the present work and potentially fruitful future studies, CIoT has the capability to bridge the physical world (with objects, resources, etc.) and the social world (with human demand, social behavior, etc.), and enhance smart resource allocation, automatic network operation, and intelligent service provisioning. <s> BIB008 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> IoT (Internet of Things) is increasingly becoming more popular mainly due to the fact that almost all the smart devices nowadays are network enabled to facilitate many current and emerging applications. However, some important issues still need to be addressed before fully realizing the potential of IoT applications. 
One of the most important issues is to have effective approaches to planning various device actions to satisfy user requirements efficiently and securely in mobile IoT applications. A mobile IoT application can be composed of mobile cloud systems and devices, such as wearable devices, smart phones and smart cars. In this type of systems, mobile networks with elastic resources from various mobile clouds are effective to support IoT applications. In this paper an effective approach to intelligent planning for mobile IoT applications is presented. This approach includes a learning technique for dynamically assessing the users' mobile IoT application and a MDP (Markov Decision Process) planning technique for enhancing efficiency of IoT device action planning. Simulation results are presented to show the effectiveness of our approach. <s> BIB009 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Machine-to-machine (M2M) communications enables networked devices to exchange information among each other as well as with business application servers and therefore creates what is known as the Internet-of-Things (IoT). The research community has a consensus for the need of a standardized protocol stack for M2M communications. On the other hand, cognitive radio technology is very promising for M2M communications due to a number of factors. It is expected that cognitive M2M communications will be indispensable in order to realize the vision of IoT. However cognitive M2M communications requires a cognitive radio-enabled protocol stack in addition to the fundamental requirements of energy efficiency, reliability, and Internet connectivity. The main objective of this paper is to provide the state of the art in cognitive M2M communications from a protocol stack perspective. This paper covers the emerging standardization efforts and the latest developments on protocols for cognitive M2M networks. In addition, this paper also presents the authors’ recent work in this area, which includes a centralized cognitive medium access control (MAC) protocol, a distributed cognitive MAC protocol, and a specially designed routing protocol for cognitive M2M networks. These protocols explicitly account for the peculiarities of cognitive radio environments. Performance evaluation demonstrates that the proposed protocols not only ensure protection to the primary users (PUs) but also fulfil the utility requirements of the secondary M2M networks. <s> BIB010 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> The Internet has become an evolving entity, growing in importance and creating new value through its expansion and added utilization. The Internet of Things (IoT) is a new concept associated with the future Internet and has recently become popular in a dynamic and global network infrastructure. However, in an IoT implementation, it is difficult to satisfy different Quality of Service (QoS) requirements and achieve rapid service composition and deployment. In this paper, we propose a new QoS control scheme for IoT systems. Based on the Markov game model, the proposed scheme can effectively allocate IoT resources while maximizing system performance. In multiagent environments, a game theory approach can provide an effective decision-making framework for resource allocation problems. 
To verify the results of our study, we perform a simulation and confirm that the proposed scheme can achieve considerably improved system performance compared to existing schemes. <s> BIB011 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes are used to make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool to develop adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs. <s> BIB012 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB013 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> With rapid development of the Internet of Things (IoT), various machine-to-machine communications technologies have emerged in recent years to provide ubiquitous wireless connections for a massive number of IoT devices. This poses significant challenges to network control and management of large-scale IoT networks. Software-defined networking (SDN) is considered a promising technology to streamline network management due to dynamic reconfigurable network elements. Thus, the integration of SDN and IoT provides a potentially feasible solution to strengthening management and control capabilities of the IoT network. Benefit from the SDN technology, resource utilization in the IoT network can be further enhanced. In this paper, we first propose a software-defined network architecture for IoT. 
Then, the resource allocation problem in the proposed SDN-based IoT network is investigated. The optimal problem of maximizing the expected average rewards of the network is formulated as a semi-Markov decision process (SMDP). The optimal solution is obtained through solving the SMDP problem using a relative value iteration algorithm. Simulation results demonstrate that the proposed resource allocation scheme is able to improve the system rewards compared with other comparative resource allocation schemes. <s> BIB014 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> The Internet of Things (IoT) represents the next significant step in the evolution of the Internet and software development. Although most IoT research focuses on data acquisition, analytics, and visualization, a subtler but equally important transition is underway. Hardware advances are making it possible to embed fully fledged virtual machines and dynamic language runtimes virtually everywhere, leading to a Programmable World in which all our everyday things are connected and programmable dynamically. The emergence of millions of remotely programmable devices in our surroundings will pose significant software development challenges. A roadmap from today's cloud-centric, data-centric IoT systems to the Programmable World highlights the technical challenges that deserve to be part of developer education and deserve deeper investigation beyond those IoT topics that receive the most attention today. <s> BIB015 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Recent research and technology trends are shifting toward IoT and CRNs. However, we think that the things-oriented, Internet-oriented, and semantic-oriented versions of IoT are meaningless if IoT objects are not equipped with cognitive radio capability. Equipping IoT objects with CR capability has lead to a new research dimension of CR-based IoT. In this article, we present an overview of CR-based IoT systems. We highlight potential applications of CR-based IoT systems. We survey architectures and frameworks of CR-based IoT systems. We furthermore discuss spectrum-related functionalities for CR-based IoT systems. Finally, we present open issues, research challenges, and future direction for these CR-based IoT networks. <s> BIB016 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> In the present scenario, performance evaluation of employees in industries is done manually, in which there are ample chances of biases. It is observed that manual employee evaluation systems can be efficiently eliminated by using ubiquitous sensing capabilities of Internet of things (IoT) devices to monitor industrial employees. However, none of the authors have used IoT data for automating performance evaluation systems of employees. Hence, this paper proposes a game theoretic approach for an IoT-based employee performance evaluation in industry. The system infers useful results about the performance of employees by mining data collected by the sensory nodes using the MapReduce model. The information hence obtained is then used to draw automated decisions for employees using game theory. The system is analyzed both experimentally and mathematically. The experimental evaluation compares the proposed system with other techniques of data mining and decision making. 
The results depict that the proposed system evaluates the performance of employees efficiently and shows a performance improvement over other techniques. The mathematical evaluation shows that correct evaluation of employees by the system effectively motivates employees in favor of the industry. Thus, the proposed system effectively and efficiently automates the employee evaluation system and decision-making process in the industry. <s> BIB017 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Summary ::: Smart traffic light control at intersections is 1 of the major issues in Intelligent Transportation System. In this paper, on the basis of the new emerging technologies of Internet of Things, we introduce a new approach for smart traffic light control at intersection. In particular, we firstly propose a connected intersection system where every objects such as vehicles, sensors, and traffic lights will be connected and sharing information to one another. By this way, the controller is able to collect effectively and mobility traffic flow at intersection in real-time. Secondly, we propose the optimization algorithms for traffic lights by applying algorithmic game theory. Specially, 2 game models (which are Cournot Model and Stackelberg Model) are proposed to deal with difference scenarios of traffic flow. In this regard, based on the density of vehicles, controller will make real-time decisions for the time durations of traffic lights to optimize traffic flow. To evaluate our approach, we have used Netlogo simulator, an agent-based modeling environment for designing and implementing a simple working traffic. The simulation results shows that our approach achieves potential performance with various situations of traffic flow. <s> BIB018 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> In modern times, it has been observed that Internet of things technology makes it possible for connecting various smart objects together through the Internet. For the effective Internet of things m... <s> BIB019 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> Innovative growth of IoT Technology has enhanced the service delivery aspects of defence sector in terms of high-tech surveillance, and reliable defence mechanisms. Along with the sensing capability for ubiquitous events, IoT Technology provides means to deliver services in time sensitive and information intensive manner. In this paper, a framework for IoT based activity monitoring of defence personnel is presented to detect the precursors of suspiciousness in terms of information outflow that can compromise the national security. Though maintaining intellectual defence personnel remained a major area of concern for every nation, still investigating reports of recent terrorist attacks in different countries have discovered the number of suspicion factors from their daily activities. The work presented in this study focuses on these factors in terms of efficient monitoring of social activities and analyzing it over suspicious scale. Moreover, Suspicious Index (SI) is defined for every personnel on the basis of their activities that can compromise national security directly or indirectly. Furthermore, automated game theoretic decision making model is presented to aid the monitoring officials in suppressing the probability of information outflow. 
In order to validate the system, two types of evaluations are performed. In one case, an imitative environment is considered to monitor 10 college students’ daily engagements for 7 days. The results are compared with the state-of-the-art techniques of data assessment. In the second case, a mathematical evaluation for the game theoretic decision making is performed. Results in both cases show that the proposed model achieves better performance in efficient monitoring of suspicious activities and effective decision making. <s> BIB020 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> I. INTRODUCTION <s> As a result of rapid advancement in communication technologies, the Internet of Things (i.e., ubiquitous connectivity among a very large number of persons and physical objects) is now becoming a reality. Nonetheless, a variety of challenges remain to be addressed, one of them being the efficient resource management in IoT. On one hand, central resource allocation is infeasible for large numbers of entities, due to excessive computational cost as well as immoderate overhead required for information acquisition. On the other hand, the devices connecting to IoT are expected to act smart, making decisions and performing tasks without human intervention. These characteristics render distributed resource management an essential feature of future IoT. Traditionally, game theory is applied to effectively analyze the interactive decision making of agents with conflicting interests. Nevertheless, conventional game models are not adequate to model large-scale systems, since they suffer from many shortcomings including analytical complexity, slow convergence, and excessive overhead due to information acquisition/exchange. In this article, we explore some non-conventional game theoretic models that fit the inherent characteristics of future large-scale IoT systems. Specifically, we discuss evolutionary games, mean field games, minority games, mean field bandit games, and mean field auctions. We provide the basics of each of these game models and discuss the potential IoT-related resource management problems that can be solved by using these models. We also discuss challenges, pitfalls, and future research directions. <s> BIB021
|
The Internet of Things (IoT) is an emerging paradigm of cyber-physical systems in which billions of interconnected smart objects collect, analyze and exchange vast amounts of information from all over the world. Today, IoT is not only a mainstream computing and communication paradigm but also provides a rich set of services in smart cities, homes, transportation and environmental monitoring BIB015 . Recently, the deployment of cognitive computing in IoT has gained considerable interest, whereby intelligence is infused into smart objects so that they can learn from the physical world. This paradigm is called the cognitive Internet of Things (CIoT) BIB005 . According to recent surveys, approximately 500 billion devices will be connected to the Internet by 2020, so a concrete IoT framework is needed that can meet these future requirements . CIoT is an extension of the IoT paradigm that is equipped with cognitive abilities to enhance performance and achieve intelligence. The IoT European Research Cluster (IERC) has published a detailed roadmap for IoT development until 2015 and beyond 2020 . The rapid development of IoT over the past few years enables smart devices to provide seamless connectivity among objects through intelligent services and applications. However, IoT applications are still not intelligent enough to perform decision making on their own and remain dependent on human beings for cognitive processing; the IoT framework is not yet equipped with a brain that can make decisions autonomously. CIoT can be defined as the IoT paradigm unified with cognitive abilities for managing inter-operation, via decision making, among heterogeneous smart objects BIB010 . Wu et al. introduced the concept of the cognitive Internet of Things BIB008 , in which general objects act as agents and interact with the physical environment with minimal human intervention. In this paper, our main focus is to equip the existing IoT framework with human-like cognition capable of performing intelligent decision making independently. In short, we embed this intelligence by adding an intelligent decision-making layer to the system design. The addition of this layer to the existing framework has several advantages, including increased resource efficiency, savings in human time and effort, intelligent decision making, enhanced service provision, self-organization and optimization. The literature clearly shows that many IoT projects have been carried out; however, research on cognitive IoT is still in the development phase and requires substantial work from the research community before practical implementation. Handling heterogeneous objects is best managed by exploiting their semantics and ontologies for the virtualization of these objects in the current CIoT architecture BIB008 . A key challenge for CIoT is to overcome the heterogeneity of dissimilar objects in terms of their features and the network technologies used for their interconnection BIB006 . In BIB006 , Foteinos et al. presented a three-layered IoT architecture enabling autonomous applications and the reuse of objects across various domains. Moreover, the IoT European Research Cluster (IERC) has initiated several IoT projects that include research on the current framework and semantic interoperability.
The IERC research cluster has provided comprehensive insight into several IoT projects, such as IoT-A, IoT-I, OPENIoT, i-Core, PROBE-IT, BUTLER, IoT@Work, IoT.EST, GAMBAS, COIN, IoT6, SmartAgrifood, CONNECT and ComVantage, which are summarized in Table 1 . The SENSEI project integrates the physical world with the digital world by dividing it into three abstractions, namely resources, entities and resource users, in order to address a large number of widely distributed wireless sensors and actuators. The key take-away is the need for new models that enhance ontology, domain knowledge and decision-making techniques based on application requirements. In , Vermesan et al. provide in-depth detail on the conceptual framework of IoT, technological trends, IoT applications and technology enablers (intelligence, communication, integration, semantic technologies, and so on). The authors set out the current IoT research agenda, timelines and priorities, covering identification technology, IoT architecture, communication technology, network technology, software, services, hardware, discovery and search-engine technologies, power and storage technologies, security and privacy, and standardization. They also provide guidelines for future technological development in IoT and highlight open issues including IoT standardization, ontology-based semantic standards, spectrum and energy communication protocol standards, standards for communication within and outside the cloud, international quality/integrity standards for data creation, data traceability and decision making in IoT. Another recent work by Vermesan and Friess elaborates on current IoT advancements and suggests a detailed future roadmap for IoT. That work highlights the IoT strategic research and innovation agenda, which includes the development of smart-X applications, IoT-related future technologies (cloud computing and semantic technologies), networks and communication, processes, data management, security and privacy, device-level energy issues, IoT-related standardization and IoT protocol convergence. Finally, the authors review current IoT projects, including OpenIoT, iCORE, Compose, SmartSantander, Fitman, OSMOSE and CALIPSO, together with their results and future opportunities for IoT. Most recently, in BIB016 , Khan et al. provided a comprehensive overview of cognitive radio based IoT systems. They presented cognitive radio (CR) based IoT as a viable solution for the effective and efficient utilization of spectrum resources even in the presence of primary users (PUs). Moreover, CR technology equips IoT objects with intelligence so that they can learn, think and make decisions about both the physical and social worlds. They also highlighted the potential applications of CR-based IoT systems and surveyed different architectures and frameworks along with the functionality of each layer in CIoT systems, as well as spectrum-related functionalities and opportunities. The authors describe the traditional three-layer architecture and the functionalities of each layer, but do not address intelligent decision making; they briefly cover VO and CVO creation but provide no details about object reuse in the framework.
Finally, intelligent decision making, which is the heart of a CIoT system, is not discussed in that work. The authors also note that the literature has presented many IoT frameworks, but that these lack clear motivation and do not address the necessity of standardization. Although efficient spectrum utilization using CR-based concepts is explored, the implementation of decision-making models (game theory, Markovian decision processes, and so on) is not covered. Therefore, there is a pressing need to apply game-theoretic decision-making models for cognitive decisions about object reuse and efficient spectrum utilization in CIoT. In BIB017 , a game-theoretic approach was proposed for IoT-based employee performance evaluation in industry: the performance of employees is evaluated by mining data collected by sensory nodes using the MapReduce model, and this information is then used to draw automated decisions about employees using game theory. In BIB018 , Bui et al. proposed a game-theoretic real-time decision-making approach for an IoT-based traffic light control system. They proposed a connected intersection system consisting of IoT devices and then used game-theoretic algorithms, including the Cournot and Stackelberg models, for intelligent decision making. In BIB019 , Kim proposed a new quality-of-service management scheme based on an IoT power control algorithm that uses game theory for power allocation. The scheme relies on an R-learning algorithm and the docitive paradigm, in which system agents teach other agents how to adjust their power levels, reducing computational complexity and speeding up the learning process. In BIB020 , Bhatia and Sood presented a decision-making solution for IoT-assisted activity monitoring of defence personnel. They implemented game-theoretic automated models to aid monitoring officials in the efficient monitoring of social activities and their analysis on a suspicion scale. In BIB021 , Semasinghe et al. presented game-theoretic mechanisms for resource management in wireless IoT systems, using game models including evolutionary games, bargaining games, mean field games and mean field auction games to manage IoT-related resources in large-scale systems. Previous studies also show that the Markov decision process is widely used for intelligent decision making. In BIB011 , Kim presented a QoS control scheme based on a Markov game model that can effectively allocate IoT resources while maximizing system performance; this distributed scheme provides a step-by-step feedback process and enables adaptability and responsiveness to current IoT system conditions. In BIB009 , Yau and Buduru proposed an intelligent planning technique for mobile IoT applications based on a Markov decision process, which enhances the efficiency of IoT device action planning and enables mobile networks with elastic resources from various mobile clouds to support IoT applications. In , Alam et al. proposed an integrated reinforcement learning approach based on a genetic algorithm for device- and application-aware SLA maintenance and management in IoT environments, enabling the automatic management of IoT devices. In BIB014 , Xiong et al.
addressed the resource allocation problem in a proposed SDN-based IoT network, formulating it as a semi-Markov decision process (SMDP) that maximizes the expected average rewards of the network and solving it with a relative value iteration algorithm. In BIB012 , Alsheikh et al. presented a comprehensive survey of Markov decision processes for wireless sensor networks, covering MDP-based designs for data exchange and topology formation, resource and power optimization, area coverage and event tracking, and security and intrusion detection. Early attempts at CIoT describe the cognitive processes as a three-layered ring comprising the virtual object (VO) layer, the composite virtual object (CVO) layer and the service layer for service provisioning BIB008 . The work in BIB008 proposed gathering context, identifying intelligent devices as real-world objects (RWOs) and subsequently sending the appropriate information or object to the Internet via a gateway from any location at any time. These RWOs are represented as VOs; such smart virtual objects hide their functional and implementation details from their recipients. The proposed CIoT framework combines many VOs to establish a CVO, which provides services to end users as well as to applications higher in the hierarchy BIB007 . A CVO is a smart object that contains all the semantically portable information about its constituent objects, including their virtual object creation, functions and parameters, user-centric services and identifiers BIB003 . The combination of CVOs opens up a new opportunity for managing dynamic objects in CIoT. The most challenging goal here is to utilize resources efficiently by reusing existing CVOs and combining them into intelligent representations, thereby minimizing the time required for their creation and management during intelligent decision making. Numerous questions remain as to how to make the cognitive vision of IoT a reality. For instance, how much cognitive ability can be pushed into the IoT without risking the service provision of IoT applications? What developments are needed to ensure robust and accurate decision making over cognitive context in IoT? The cognitive radio network (CRN) is a promising paradigm that optimizes radio spectrum utilization and throughput BIB005 ; its ability to make perceptive decisions based on historical information gives it similar potential for deployment in CIoT. A few attempts have been made to present a deeper view of the decision-theoretic models mentioned above for CRNs BIB005 , BIB013 . The literature shows that game models are well suited to capturing the interactions among several players. The work by Y. Xu et al. provides a detailed treatment of decision making in cognitive radio networks through the selection of an appropriate channel, among several channels, for opportunistic spectrum access using multi-agent decision-theoretic models; they proposed a complete framework for analysing, learning and evaluating the selection process in cognitive radio networks. In BIB013 , Xu et al. discussed the game-theoretic perspective of self-organization and optimization for cognitive small cells in CRNs, and explored intelligent decision making for small cells in detail. In BIB003 , Xu et al.
highlighted opportunistic spectrum access in CRNs for achieving global optimization using local interaction games. They proposed a localized altruistic game in which each player maximizes the sum of its own utility and the utilities of its neighbours, achieving global optimization via local information exchange for cognitive decision making in CRNs. The work in BIB004 likewise proposed game-theory-based network selection in CRNs. Finally, the most recent literature has proposed using CRN techniques in IoT-based systems to manage the shortage of spectrum for IoT devices, employing different game models (inspection games, bargaining games, hierarchical games, and so on) for intelligent channel-selection decisions in different IoT scenarios. In this paper, we provide a brief overview of current technological advancements in IoT and suggest how existing work on object semantics, ontology, interoperability, communication, integration and management can be used in our proposed novel CIoT architecture. The paper also provides insight into the basic game models that are useful for analyzing and modeling user interaction with a CIoT system. The most suitable game models for CIoT include repeated games, graphical games, evolutionary games, hierarchical games, coalition games and Bayesian games. These models provide guidelines for designing non-cooperative game models that support learning and self-organization. In a non-cooperative game model, the players (objects) are autonomous and make rational decisions in order to maximize their individual utility functions; the most commonly used solution concepts are the Nash equilibrium (NE) and the correlated equilibrium BIB013 (a minimal pure-strategy equilibrium check is sketched at the end of this introduction). Moreover, the decisions of the players are distributed and autonomous, which leads to self-organizing optimization. Finally, the paper also highlights the behaviour update rule, a fundamental element for the practical implementation of a game model. These solutions are inherited from CRNs and can be deployed on multi-agent systems using game theory, specifically by implementing hybrid or multiple learning approaches for efficient decision making in CIoT. The most challenging issue, namely unpredictable, dynamic and incomplete information, can be addressed by learning as well as by knowledge model extraction. For example, in game theory the commonly deployed solution concepts are the NE and the correlated equilibrium BIB002 ; likewise, the distributed and autonomous decisions of the players can lead to self-organization and optimization, and the convergence properties can be analyzed by applying the theories of Markovian processes and stochastic approximation - BIB001 . It is evident that the work presented in - and BIB017 - BIB001 clearly emphasizes the importance of decision making in CIoT for optimized service provisioning. Contributions of This Paper: This paper provides a comprehensive survey of decision-theoretic models for CIoT. The major contributions of this paper are as follows:
• We provide an insightful overview of decision-theoretic schemes for CIoT.
• We propose a novel operational framework for large-scale CIoT based on a real-world example.
• We provide a detailed overview of game-theoretic solutions for cognitive decision making in IoT.
• We propose a solution for intelligent decision making in IoT based on selecting multiple or hybrid decision-theoretic solutions.
• We highlight the issues, challenges and future directions of decision-theoretic solutions in CIoT.
The organization of the paper is shown in Fig. 1 and described below. The proposed cognitive framework, together with a real-world scenario, is discussed in Section II; this framework facilitates the discussion of the important open challenges in cognitive IoT and decision-theoretic models. In Section III, we present the fundamental game-theoretic solutions for CIoT and discuss in detail the decision-theoretic models appropriate for intelligent decision making. A comparative analysis of decision-theoretic solutions in terms of information, convergence speed and cost is given in Section IV. Section V lists possible open issues and future directions for CIoT and discusses the selection of multiple or hybrid decision-theoretic solutions for intelligent decision making in CIoT. Finally, Section VI concludes the paper.
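As a concrete illustration of the pure-strategy Nash equilibrium concept invoked above, the following minimal Python sketch checks every action profile of a small two-player matrix game for stability against unilateral deviation. The payoff matrix is invented purely for illustration (for example, two objects contending for a shared channel) and is not taken from any of the surveyed works.

import itertools

# Illustrative check for pure-strategy Nash equilibria of a two-player matrix game.
# payoffs[i][j] = (payoff to player 1, payoff to player 2) for the action profile (i, j);
# the numbers are invented for this sketch.
payoffs = [[(3, 3), (1, 4)],
           [(4, 1), (2, 2)]]

def is_nash(i, j):
    """Neither player can gain by unilaterally deviating from the profile (i, j)."""
    p1, p2 = payoffs[i][j]
    best_p1 = max(payoffs[k][j][0] for k in range(len(payoffs)))
    best_p2 = max(payoffs[i][k][1] for k in range(len(payoffs[i])))
    return p1 >= best_p1 and p2 >= best_p2

equilibria = [profile for profile in itertools.product(range(2), repeat=2) if is_nash(*profile)]
print(equilibria)   # -> [(1, 1)] for this prisoner's-dilemma-style payoff structure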
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. MOTIVATION AND REAL WORLD SCENARIO <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. MOTIVATION AND REAL WORLD SCENARIO <s> Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes are used to make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool to develop adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. MOTIVATION AND REAL WORLD SCENARIO <s> In the present scenario, performance evaluation of employees in industries is done manually, in which there are ample chances of biases. It is observed that manual employee evaluation systems can be efficiently eliminated by using ubiquitous sensing capabilities of Internet of things (IoT) devices to monitor industrial employees. However, none of the authors have used IoT data for automating performance evaluation systems of employees. Hence, this paper proposes a game theoretic approach for an IoT-based employee performance evaluation in industry. The system infers useful results about the performance of employees by mining data collected by the sensory nodes using the MapReduce model. The information hence obtained is then used to draw automated decisions for employees using game theory. 
The system is analyzed both experimentally and mathematically. The experimental evaluation compares the proposed system with other techniques of data mining and decision making. The results depict that the proposed system evaluates the performance of employees efficiently and shows a performance improvement over other techniques. The mathematical evaluation shows that correct evaluation of employees by the system effectively motivates employees in favor of the industry. Thus, the proposed system effectively and efficiently automates the employee evaluation system and decision-making process in the industry. <s> BIB003
|
Current applications in large-scale CIoT are still unable to perform effective and efficient communication and intelligent decision making. The literature clearly shows that IoT is in a development phase and that the research community is mainly focusing on its fundamental architecture, applications, future technologies (cloud computing and semantic technologies), networks and communication, data management, and security and privacy - , BIB003 - BIB002 ; the IoT framework is still not smart enough to learn, think, and understand the cyber, physical and social worlds by itself. Current technological advancements do not yet equip objects with a brain, i.e., do not turn them into smart objects that can be reused for intelligent decision making, and current CIoT applications remain inadequate for effective interoperability and intelligent decision making among heterogeneous objects. To address these issues, multi-agent decision-theoretic models are proposed here to realize the decision layer in a federated manner. In this article, we focus specifically on the methodology for analyzing, learning, designing and evaluating multi-agent decision-theoretic solutions for intelligent decision making among heterogeneous objects and for channel selection in CIoT. To the best of our knowledge, no comprehensive paper implements a hybrid solution combining game theory, Markovian decision processes, multi-armed bandits and optimal stopping for decision making in IoT, and the literature shows that the research community has paid little attention to this direction. Previous studies clearly indicate that the work on channel selection in IoT and CRNs is based on either game theory, Markovian decision processes, optimal stopping problems or multi-armed bandit problems, but none of the authors has used all of these techniques together for appropriate channel and object selection (reuse) for intelligent decision making. In BIB001 , Xu et al. presented a survey proposing decision-theoretic solutions for channel selection in CRNs, which are not directly suitable for the IoT scenario. Therefore, in this paper we suggest decision-theoretic solutions for IoT based on concepts from CRNs. Moreover, we propose a novel framework for IoT that adds a new decision-making layer to the three-layered hierarchical architecture; this layer equips the architecture with a brain, and the decision maker selects the appropriate object and channel for intelligent decision making. This motivates us to propose a framework capable of autonomous cognitive decision making for a wide range of applications, from smart homes to smart cities. Let us consider a smart city scenario in which an elderly person, Bob, has opted for medical assistance from the medical center. In this scenario, he is equipped with a wearable smart device capable of monitoring his health (body temperature, heartbeat, blood pressure, and so on) and sending this information to the local intelligent decision maker in his smart home, as shown in Fig. 2 . This local CIoT decision maker regularly obtains the health status of the patient through the connected device. The local system is equipped with the four-layered architecture and is capable of decision making at the local level. In this framework, the VOs and their corresponding CVOs are created to represent all the functionalities of the RWOs, as discussed above.
For instance, the local decision maker obtains regular health status updates from the patient and creates the corresponding VOs and CVOs in their respective databases. The local decision maker at Bob's house is connected to the medical center's global decision maker server, which monitors the patient's health at all times and informs the relevant doctor about his health status by selecting the appropriate CVO from the object repository. The framework also contains a repository of object policies holding all the information related to these VOs and CVOs; this repository helps the decision maker select and reuse the most appropriate existing object for future intelligent decisions. In this smart CIoT system, the decision maker therefore not only informs the doctor about the patient's health status but also alerts the hospital staff in case of an emergency by selecting the appropriate CVO from its repository. Other life-saving support systems, such as the ambulance service and paramedic staff, are likewise informed about the patient's current health status by the intelligent decision maker through a CVO. In short, this intelligent CIoT system is connected to all the emergency service provider systems in order to save the life of any patient in case of emergency.
FIGURE 2. Smart city scenario using the CIoT framework for intelligent decision making.
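To make the scenario concrete, the following minimal Python sketch shows how such a local decision maker might react to a vitals reading: it checks the reading against emergency thresholds and, on an anomaly, fans the alert out to the doctor, the hospital staff and the emergency services through the selected CVO. All names, thresholds and endpoints are hypothetical illustrations, not part of the surveyed frameworks.

# Hypothetical sketch of the local decision maker in Bob's smart home.
# Thresholds, endpoints and the CVO representation are illustrative only.

EMERGENCY_THRESHOLDS = {"heart_rate": (40, 140), "body_temp": (35.0, 39.5)}

class LocalDecisionMaker:
    def __init__(self, alert_cvo, endpoints):
        self.alert_cvo = alert_cvo          # CVO selected from the object repository
        self.endpoints = endpoints          # doctor, hospital staff, ambulance service

    def handle_reading(self, vitals):
        """Check each vital sign and trigger the alert CVO on the first anomaly."""
        for signal, value in vitals.items():
            low, high = EMERGENCY_THRESHOLDS.get(signal, (float("-inf"), float("inf")))
            if not low <= value <= high:
                self.raise_alert(signal, value, vitals)
                return True
        return False

    def raise_alert(self, signal, value, vitals):
        # In a real deployment the CVO would encapsulate the alarm and uplink VOs;
        # here the alert is simply fanned out to every registered endpoint.
        for endpoint in self.endpoints:
            print(f"[{self.alert_cvo}] {signal}={value} out of range -> notify {endpoint}: {vitals}")

# Example: an out-of-range heart rate triggers alerts to all emergency endpoints.
dm = LocalDecisionMaker("emergency_alert_cvo",
                        ["doctor", "hospital_staff", "ambulance_service"])
dm.handle_reading({"heart_rate": 150, "body_temp": 36.8})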
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. NOVEL FRAMEWORK FOR CIoT <s> The modern theory of evolutionary dynamics is founded upon the remarkable insights of R. A. Fisher and Sewall Wright and set forth in the loci classici The Genetical Theory of Natural Selection (1930) and ‘Evolution in Mendelian Populations’ (1931). By the time of the publication of Wright’s paper in 1931 all of the theory of population genetics, as it is presently understood, was established. It is a sign of the extraordinary power of these early formulations, that nothing of equal significance has been added to the theory of population genetics in the thirty years that have passed since that time. Yet we cannot take this period to mean that we now have an adequate theory of evolutionary dynamics. On the contrary, the theory of population genetics, as complete as it may be in itself, fails to deal with many problems of primary importance for an understanding of evolution. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. NOVEL FRAMEWORK FOR CIoT <s> Collaborative spectrum sensing (CSS) between secondary users (SUs) in cognitive networks exhibits an inherent tradeoff between minimizing the probability of missing the detection of the primary user (PU) and maintaining a reasonable false alarm probability (e.g., for maintaining good spectrum utilization). In this paper, we study the impact of this tradeoff on the network structure and the cooperative incentives of the SUs that seek to cooperate to improve their detection performance. We model the CSS problem as a nontransferable coalitional game, and we propose distributed algorithms for coalition formation (CF). First, we construct a distributed CF algorithm that allows the SUs to self-organize into disjoint coalitions while accounting for the CSS tradeoff. Then, the CF algorithm is complemented with a coalitional voting game to enable distributed CF with detection probability (CF-PD) guarantees when required by the PU. The CF-PD algorithm allows the SUs to form minimal winning coalitions (MWCs), i.e., coalitions that achieve the target detection probability with minimal costs. For both algorithms, we study and prove various properties pertaining to network structure, adaptation to mobility, and stability. Simulation results show that CF reduces the average probability of miss per SU up to 88.45%, relative to the noncooperative case, while maintaining a desired false alarm. For CF-PD, the results show that up to 87.25% of the SUs achieve the required detection probability through MWCs. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. NOVEL FRAMEWORK FOR CIoT <s> The handling of the number of objects that will be part of the Internet of Things (IoT) and the various networking technologies used for their interconnection requires suitable architecture and technological foundations. Despite significant work on architectures and test facilities for the IoT, there is still a lack of management functionality to overcome the technological heterogeneity and complexity of the underlying networks and IoT infrastructure, so as to enhance context/situational-awareness, reliability, and energy-efficiency of IoT applications. This article presents a cognitive management framework for the IoT aiming to address these issues, comprising three levels of functionality: virtual objects (VOs), composite VOs (CVOs), and service levels. 
Cognitive entities at all levels provide the means for self-management (configuration, optimization, and healing) and learning. Three fundamental processes of this framework are presented: dynamic CVO creation, knowledge-based CVO instantiation, and CVO self-healing. A first prototype implementation of this framework and corresponding derived results are presented. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. NOVEL FRAMEWORK FOR CIoT <s> The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities. <s> BIB004
|
The literature review clearly shows that IoT research is largely limited to interoperability, architecture, communication and security, and does not address the issue of autonomous decision making for smart city environments , . Moreover, the current three-layered IoT framework is still inadequate for effective communication and intelligent decision making BIB003 , , . In this paper, a decision-theoretic framework is therefore proposed to provide the decision layer, as shown in Fig. 3 . This novel framework is hierarchical and composed of four layers with distinct functionalities: the first level is the VO level, the second is the decision-making level, the third is the CVO level, and the uppermost is the service/stakeholder level. The intelligent components at each level provide self-management (configuration, healing, optimization and protection) and learning. In other words, the entities at each level can perceive and reason on context to perform the associated knowledge-based decision making (through optimization algorithms and machine learning) and autonomously adapt their behaviour and configuration to the derived situation. The aims of this management framework include:
• The implementation of intelligent learning methods that enhance context awareness by providing the means to exploit more objects.
• The implementation of hybrid intelligent decision-making algorithms that improve energy efficiency by selecting the most suitable object among heterogeneous objects.
• The efficient management of the heterogeneous resources of a large-scale system in the form of intelligent representations.
• The implementation of hybrid decision-theoretic solutions for channel selection that improve the spectrum efficiency of a large CIoT system.
• The implementation of intelligent algorithms that enable reliable service/application provision by using heterogeneous objects in a complementary manner.
To achieve these goals, we add an additional decision-making layer to the earlier architecture. This layer equips the framework with a brain and enables the decision maker to coordinate with the supporting databases (CVO template repository, service repository, VO repository, policy repository, and so on) for intelligent decisions about channel selection and object reuse in CIoT. The layer serves as a bridge between the service level and the object level. The CIoT paradigm is hierarchical in nature and requires game-based algorithms for interlinking and managing objects. Our proposed multi-agent framework is capable of analyzing, learning and evaluating decision-theoretic object and channel selection for intelligent decision making in CIoT. The heart of the proposed CIoT framework is the intelligent decision maker, which is equipped with algorithms from game theory, Markovian decision processes, optimal stopping and multi-armed bandits, and which combines hybrid or multiple decision-theoretic models for object and channel selection. The role of the decision maker is the most vital in the framework, as it instructs the other layers and their components regarding future actions related to object creation, updating and policies; a minimal learning-based sketch of this selection step is given below.
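As a purely illustrative sketch of how the decision maker could learn which action to take for an incoming request (reuse an existing CVO, compose a new one, or defer), the following tabular Q-learning snippet updates value estimates from experience. The states, actions and reward values are assumptions made for this example; the framework itself does not prescribe a particular learning algorithm.

import random

# Illustrative tabular Q-learning for the decision maker's action selection.
# States, actions and rewards are invented for this sketch.

ACTIONS = ["reuse_cvo", "compose_cvo", "defer"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2        # learning rate, discount, exploration
q_table = {}                                  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the candidate decisions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy interaction loop: reusing an existing CVO is assumed cheaper than composing one.
for _ in range(500):
    state = "request_pending"
    action = choose_action(state)
    reward = {"reuse_cvo": 1.0, "compose_cvo": 0.4, "defer": -0.5}[action]
    update(state, action, reward, "idle")

print(max(ACTIONS, key=lambda a: q_table.get(("request_pending", a), 0.0)))  # typically "reuse_cvo"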
For instance, the decision maker instructs the CVO management unit to select an existing object from the CVO template repository, or instructs the CVO container to create a new CVO and update its policy. The framework thus supports the decision maker in selecting appropriate objects, reusing existing objects and creating smart objects by merging existing objects into intelligent representations, thereby minimizing the time needed for object creation, decision making and channel selection and making efficient use of resources. This layer is also responsible for policy management and service provision to the other layers; its repository of object policies contains all the information related to the VOs and CVOs. The layer further contains a learning and understanding database in which learning algorithms such as Q-learning , stochastic learning BIB001 and reinforcement learning BIB002 can be implemented for intelligent object selection and reuse, and which helps the decision maker select the most feasible channel for communication. This database holds complete information about semantic derivation, ontology, the environmental context and knowledge discovery for the objects; it makes every object in CIoT intelligent by analysing data and discovering valuable patterns as knowledge, and it helps the decision maker select and reuse the most appropriate existing object and channel for future intelligent decisions. The first layer of our proposed architecture is the VO layer, which serves as the bridge between the framework and the real-world objects (RWOs), i.e., the IoT-enabled devices. The IoT devices have sensors that collect information from the user/environment, while actuators enable the devices to transfer this data to the CIoT framework through a gateway controller. The VO management unit contains all the functions related to VOs, including their creation, updating and deletion. Essentially, this level provides a high-level interface to devices/objects by abstracting the complexity of the underlying CIoT infrastructure. The VOs contain all the information about the discovery, exploitation and detailed description of the objects/devices . The information about every object is stored in the VO registry, which includes the type of VO, the object it is connected to, the functionalities and features that the VO can provide, and other related information. Moreover, the VO template repository supports the VO registry in creating VOs with predefined properties, and also supports the policy repository in recording new VO functionalities. The CVO level is responsible for combining many VOs to establish CVOs, which provide services to end users as well as to higher-level applications BIB004 . At this level, the CVO management unit handles all the databases/registries and performs coordinated, intelligent decisions according to user requirements. All information regarding the CVOs is stored in the CVO registries, which include the detailed features of each CVO, the VOs included in it, and all data related to its creation. The request and situation matching component is responsible for building knowledge and experience about all previously created CVOs.
This knowledge is then utilized by the intelligent decision maker to make its decisions more robust and efficient. Basically, this component works with the CVO management unit to search for existing CVOs that can fulfil the requested service requirements, which enables the reuse of existing CVOs, increases efficiency in terms of time, and saves resources. More specifically, when the decision maker requests a CVO, these components explore the CVO registry and try to find that particular CVO by pattern matching. If such a CVO exists, the decision maker reuses it; otherwise it triggers the optimal composition of VOs, which dynamically creates a CVO according to the requested functions and policies. The service level enables users to define the features of a required service/application through compatible interfaces and also provides all the functionalities needed to fulfil the requested service requirements independently. These services are characterized by various features, including performance and energy efficiency, and this level must also support service parameters such as time, location, and temperature. The layer is composed of a service request analysis and acquisition component and several supporting databases. Initially, the service request from the user is translated by a natural language processing component so that it can be understood by the remaining components of this layer; this component translates and infers the functions and policies requested by the user through the interface. The request analysis and acquisition component evaluates the conditions under which the services were requested and derives the corresponding parameters (e.g., time and location) with the help of the attached databases, namely the situation model, the service template repository, and the real-world knowledge model. These connected databases help in obtaining and learning information based on user preferences. This layer forwards the user's request to the CVO level for the dynamic creation of a CVO from different VOs. For instance, if Bob is having a heart attack, the local decision maker at Bob's home receives a request from his smart watch for the assistance of a medical doctor. The decision maker first searches for an appropriate existing CVO in the CVO registry. If one is found, it automatically reuses that CVO to provide medical assistance, triggers an alarm at home, and informs the global decision maker at the medical centre. Otherwise, it combines several VOs and creates a CVO, which ultimately triggers the emergency condition at home as well as at the hospital assistance server. At this stage, the intelligent decision maker uses a game-theoretic solution to select the appropriate object and an efficient channel for transferring all the data related to Bob's current condition to the hospital assistance centre. The efficient use of a hybrid game-theoretic solution not only saves time but also ensures the secure transmission of information to both the hospital and the emergency service providers. The same intelligent framework and assistance system at the hospital receives the emergency information and makes decisions accordingly; the global decision maker at the hospital dispatches the ambulance by triggering the corresponding CVO from its respective databases.
In this smart city environment, the intelligent traffic monitoring system assists the ambulance in selecting the most appropriate route to Bob's house through coordination among several objects. This coordination among objects, and the selection of the most suitable channel for wireless communication between the devices in the smart home, smart hospital, and smart traffic monitoring system, can be achieved by using the proposed decision-making layer, which applies existing decision-theoretic solutions, including game-theoretic models, the Markovian decision process (MDP), the optimal stopping problem (OSP), and the multi-armed bandit problem (MAB), for appropriate object selection and intelligent decision making in the smart city environment. The intelligent framework also enables several decision makers to coordinate and manage multiple players in a large-scale CIoT by exploiting local interaction games and a coordination ontology. In our scenario, the patient's life can be saved by using a game-theoretic model that achieves global optimization through a local interaction game among objects. This paper specifically focuses on decision-theoretic solutions, cooperative decision making, and global optimization using the local interaction game model in large-scale CIoT, which is discussed in detail in later sections.
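To make the reuse-or-compose behaviour of the decision maker concrete, the following minimal Python sketch illustrates the lookup-then-compose control flow described above. It is only an illustration of the idea: the class names, repositories, and matching rule (CVORegistry, compose_cvo, simple function-set coverage) are hypothetical and are not part of the proposed framework's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CVO:
    """A composite virtual object built from one or more VOs (hypothetical model)."""
    name: str
    functions: frozenset                       # functions this CVO can provide
    vos: List[str] = field(default_factory=list)

class CVORegistry:
    """Toy CVO registry: stores existing CVOs and supports pattern matching."""
    def __init__(self):
        self._cvos: List[CVO] = []

    def add(self, cvo: CVO) -> None:
        self._cvos.append(cvo)

    def find_matching(self, required: frozenset) -> Optional[CVO]:
        # Request/situation matching: return an existing CVO covering the request.
        for cvo in self._cvos:
            if required <= cvo.functions:
                return cvo
        return None

def compose_cvo(required: frozenset, vo_repository: dict) -> CVO:
    """Dynamically compose a new CVO from VOs that together cover the request."""
    selected, covered = [], set()
    for vo_name, vo_functions in vo_repository.items():
        if vo_functions & (required - covered):
            selected.append(vo_name)
            covered |= vo_functions
        if required <= covered:
            break
    return CVO(name="cvo_" + "_".join(sorted(required)),
               functions=frozenset(covered), vos=selected)

def handle_service_request(required: frozenset, registry: CVORegistry,
                           vo_repository: dict) -> CVO:
    """Reuse an existing CVO if one matches the request; otherwise compose a new one."""
    existing = registry.find_matching(required)
    if existing is not None:
        return existing                        # reuse: saves time and resources
    new_cvo = compose_cvo(required, vo_repository)
    registry.add(new_cvo)                      # register for future reuse
    return new_cvo

# Example: emergency assistance request (hypothetical VO functions).
vo_repo = {"smart_watch": frozenset({"heart_rate"}),
           "home_gateway": frozenset({"alarm", "notify_hospital"})}
registry = CVORegistry()
cvo = handle_service_request(frozenset({"heart_rate", "notify_hospital"}), registry, vo_repo)
print(cvo.name, cvo.vos)
```

A real implementation would additionally consult the policy repository and apply the decision-theoretic channel selection discussed in the following sections.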
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. DECISION MAKING FOR CIoT <s> A multi-person discrete game where the payoff after each play is stochastic is considered. The distribution of the random payoff is unknown to the players and further none of the players know the strategies or the actual moves of other players. A learning algorithm for the game based on a decentralized team of learning automata is presented. It is proved that all stable stationary points of the algorithm are Nash equilibria for the game. Two special cases of the game are also discussed, namely, game with common payoff and the relaxation labelling problem. The former has applications such as pattern recognition and the latter is a problem widely studied in computer vision. For the two special cases it is shown that the algorithm always converges to a desirable solution. > <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. DECISION MAKING FOR CIoT <s> In this paper, we model the various users in a wireless network (e.g., cognitive radio network) as a collection of selfish autonomous agents that strategically interact to acquire dynamically available spectrum opportunities. Our main focus is on developing solutions for wireless users to successfully compete with each other for the limited and time-varying spectrum opportunities, given experienced dynamics in the wireless network. To analyze the interactions among users given the environment disturbance, we propose a stochastic game framework for modeling how the competition among users for spectrum opportunities evolves over time. At each stage of the stochastic game, a central spectrum moderator (CSM) auctions the available resources, and the users strategically bid for the required resources. The joint bid actions affect the resource allocation and, hence, the rewards and future strategies of all users. Based on the observed resource allocations and corresponding rewards, we propose a best-response learning algorithm that can be deployed by wireless users to improve their bidding policy at each stage. The simulation results show that by deploying the proposed best-response learning algorithm, the wireless users can significantly improve their own bidding strategies and, hence, their performance in terms of both the application quality and the incurred cost for the used resources. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. DECISION MAKING FOR CIoT <s> In order to cater for the overwhelming growth in bandwidth demand from mobile Internet users operators have started to deploy different, overlapping radio access network technologies. One important challenge in such a heterogeneous wireless environment is to enable network selection mechanisms in order to keep the mobile users Always Best Connected (ABC) anywhere and anytime. Game theory techniques have been receiving growing attention in recent years as they can be adopted in order to model and understand competitive and cooperative scenarios between rational decision makers. This paper presents an overview of the network selection decision problem and challenges, a comprehensive classification of related game theoretic approaches and a discussion on the application of game theory to the network selection problem faced by the next generation of 4G wireless networks. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. 
DECISION MAKING FOR CIoT <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. DECISION MAKING FOR CIoT <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB005 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> III. DECISION MAKING FOR CIoT <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. 
Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB006
Decision making for CIoT involves continuous inference, process selection, and interpretation of the meaning of the acquired data. As already discussed, in CIoT applications decisions may need to be taken for, e.g., informing the medical centre, selecting the best route for an ambulance, acquiring channels for wireless transmission, or even the joint execution of a task among multiple services. In this paper, we propose a solution for intelligent decision making that uses the learning capability of cognitive radio networks (CRNs) to make appropriate decisions based on historical information about the smart objects in CIoT, and we analyse the suitability of four basic decision-theoretic models for CIoT applications. A. GAME THEORY Game theory BIB003 is a mathematical model for analysing mutual interactions in multi-user decision systems. A game comprises a fixed number of players, action sets, and a utility function that maps the players' actions onto real values. Games are classified as cooperative or non-cooperative. In a cooperative game, the players (objects) are grouped together and cooperate towards a joint decision that maximizes the utility function according to an agreed division of the payoff. In a non-cooperative game, the solution concepts of the Nash equilibrium (NE) and the correlated equilibrium (CE) are deployed BIB001 . Moreover, the decisions of the players are distributed and autonomous, which leads to self-organizing optimization BIB003 . To model the mutual interactions among multiple players, we formulate a game for object selection. The game model is denoted by G = {N, A_n, u_n}, where N = {1, . . . , N} is the set of players (objects), A_n is the strategy set of object n, and u_n is the utility function of player n. A player uses a pure strategy when it selects a single action from its action set. Let the strategy of object n be denoted by a_n ∈ A_n and let a = {a_1, . . . , a_N} denote the strategy profile of all objects. Moreover, u_n(a_n, a_{-n}) denotes the utility function, σ_n denotes the mixed strategy of object n, and σ_n(a_n) denotes the probability that object n selects strategy a_n. Therefore, the expected utility under the mixed-strategy profile σ = (σ_n, σ_{-n}) can be expressed as u_n(σ_n, σ_{-n}) = Σ_{a ∈ A_1 × · · · × A_N} ( Π_{m ∈ N} σ_m(a_m) ) u_n(a_n, a_{-n}). The NE BIB005 , BIB003 is the standard solution concept for non-cooperative games: at an NE, no user can increase its individual utility by deviating unilaterally, given the equilibrium strategies of the other players. Similarly, the CE BIB003 provides a solution with better coordination and more flexible utility-function design, obtained by correlating the objects' strategies on the basis of a common observation. The CE is attractive because the set of correlated equilibria is convex, which makes it easier to address fairness between players, compared with the isolated points of the NE. Non-cooperative game models with NE and CE solutions are proposed in BIB005 , BIB006 , and BIB003 . The following aspects must be taken into consideration: • The utility function must be designed cautiously so that users are discouraged from wasting resources BIB002 . This is important because game theory addresses the interaction among multiple decision makers with no a priori guarantee on performance. • Information updating and learning, specifically by achieving constant updating of the information rules for the users.
New learning procedures are required to reach a stable outcome. In this regard, our framework includes a learning-by-understanding component that builds practical knowledge about the unknown environment. The intention here is to implement reinforcement learning algorithms or stochastic learning automata BIB004 , BIB006 that converge in practice to the NE and CE of the game, yielding desirable solutions in the rapidly changing CIoT environment.
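To make the notation above concrete, the short sketch below computes the expected utility u_n(σ) of a mixed-strategy profile for a toy two-object, two-channel anti-coordination game and checks the unilateral-deviation condition that defines an NE. The payoff numbers are purely illustrative assumptions.

```python
import itertools
import numpy as np

# Two objects, two channels; u[n][a1][a2] is the payoff of object n when
# object 1 plays a1 and object 2 plays a2. Hypothetical anti-coordination
# payoffs: colliding on the same channel pays 1, separating pays 3.
u = [np.array([[1.0, 3.0], [3.0, 1.0]]),   # payoffs of object 1
     np.array([[1.0, 3.0], [3.0, 1.0]])]   # payoffs of object 2

def expected_utility(n, sigma):
    """u_n(sigma) = sum over joint actions of prod_m sigma_m(a_m) * u_n(a)."""
    total = 0.0
    for a in itertools.product(range(2), range(2)):
        prob = sigma[0][a[0]] * sigma[1][a[1]]
        total += prob * u[n][a]
    return total

def is_nash(sigma, tol=1e-9):
    """At an NE, no player can gain by unilaterally deviating to a pure strategy."""
    for n in range(2):
        current = expected_utility(n, sigma)
        for pure in range(2):
            deviation = list(sigma)
            deviation[n] = np.eye(2)[pure]          # pure-strategy deviation of player n
            if expected_utility(n, deviation) > current + tol:
                return False
    return True

# The uniform mixed profile is an equilibrium of this anti-coordination game.
sigma = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
print(expected_utility(0, sigma), is_nash(sigma))
```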
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) APPROPRIATE GAME MODEL FOR CIoT <s> We study a simple game-theoretic model for the spread of an innovation in a network. The diffiusion of the innovation is modeled as the dynamics of a coordination game in which the adoption of a common strategy between players has a higher payoff. Classical results in game theory provide a simple condition for the innovation to spread through the network. The present paper characterizes the rate of convergence as a function of graph structure. In particular, we derive a dichotomy between well-connected (e.g. random) graphs that show slow convergence and poorly connected, low dimensional graphs that show fast convergence. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) APPROPRIATE GAME MODEL FOR CIoT <s> In this paper, we present and analyze the properties of a new class of games - the spatial congestion game (SCG), which is a generalization of the classical congestion game (CG). In a classical congestion game, multiple users share the same set of resources and a user's payoff for using any resource is a function of the total number of users sharing it. As a potential game, this game enjoys some very appealing properties, including the existence of a pure strategy Nash equilibrium (NE) and that every improvement path is finite and leads to such a NE (also called the finite improvement property or FIP). While it's tempting to use this model to study spectrum sharing, it does not capture the spatial reuse feature of wireless communication, where resources (interpreted as channels) may be reused without increasing congestion provided that users are located far away from each other. This motivates us to study an extended form of the congestion game where a user's payoff for using a resource is a function of the number of its interfering users sharing it. This naturally results in a spatial congestion game (SCG), where users are placed over a network (or a conflict graph). We study fundamental properties of a spatial congestion game; in particular, we seek to answer under what conditions this game possesses the finite improvement property or a Nash equilibrium. We also discuss the implications of these results when applied to wireless spectrum sharing. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) APPROPRIATE GAME MODEL FOR CIoT <s> The issue of distributed channel selection in opportunistic spectrum access is investigated in this paper. We consider a practical scenario where the channel availability statistics and the number of competing secondary users are unknown to the secondary users. Furthermore, there is no information exchange between secondary users. We formulate the problem of distributed channel selection as a static non-cooperative game. Since there is no prior information about the licensed channels and there is no information exchange between secondary users, existing approaches are unfeasible in our proposed game model. We then propose a learning automata based distributed channel selection algorithm, which does not explicitly learn the channel availability statistics and the number of competing secondary users but learns proper actions for secondary users, to solve the proposed channel selection game. The convergence towards Nash equilibrium with respect to the proposed algorithm also has been investigated. 
<s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) APPROPRIATE GAME MODEL FOR CIoT <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB004
This section provides insight into the basic game models that are useful for analysing and modelling user interaction in a CIoT system. It also highlights the behaviour update rule, which is the most important element in implementing a game model; all of the models below rely on parallel sensing strategies for their game-based solutions. The models currently suitable for CIoT include the following. The first is the repeated game: in CIoT there are millions of objects distributed across the network, and we need a repeated game played over a finite or infinite horizon. In such a game, the players update their strategies according to their previous action–payoff history. These games are well suited to modelling and analysing objects in a distributed environment, because the decision makers observe the environment by constantly accessing the spectrum. In BIB003 , Xu et al. presented a repeated-game solution for distributed channel selection in an opportunistic spectrum access (OSA) system. In BIB001 , the authors also presented a channel selection algorithm based on reinforcement learning for a multi-user, multi-channel distributed system; they simulated the reinforcement-learning-based algorithms in a static environment and showed that they converge to the NE of the game. The reinforcement learning technique has also been applied in a time-varying spectrum environment for distributed channel selection in an OSA system BIB002 . In BIB004 , Xu et al. proposed an intelligent learning algorithm, known as stochastic learning automata, that converges to the Nash equilibrium of the game.
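The stochastic-learning-automata idea referred to above can be sketched as follows. The update shown is the standard linear reward–inaction (L_R-I) rule, in which each user shifts probability mass toward the channel it just used in proportion to the reward received; the channel availability statistics, number of users, and step size are illustrative assumptions rather than the settings of the cited works.

```python
import random

NUM_USERS, NUM_CHANNELS, STEP = 4, 3, 0.1
IDLE_PROB = [0.9, 0.6, 0.3]        # hypothetical channel availability statistics

# Each user keeps a probability vector over channels (the automaton state).
probs = [[1.0 / NUM_CHANNELS] * NUM_CHANNELS for _ in range(NUM_USERS)]

def reward(user, choices):
    """Reward 1 if the chosen channel is idle and no other user collides, else 0."""
    ch = choices[user]
    idle = random.random() < IDLE_PROB[ch]
    collision = any(choices[other] == ch for other in range(NUM_USERS) if other != user)
    return 1.0 if idle and not collision else 0.0

for t in range(5000):
    # Every user selects a channel according to its current mixed strategy.
    choices = [random.choices(range(NUM_CHANNELS), weights=p)[0] for p in probs]
    for user in range(NUM_USERS):
        r = reward(user, choices)
        a = choices[user]
        # Linear reward-inaction (L_R-I): move probability mass toward the
        # chosen action in proportion to the received (normalised) reward.
        for ch in range(NUM_CHANNELS):
            if ch == a:
                probs[user][ch] += STEP * r * (1.0 - probs[user][ch])
            else:
                probs[user][ch] -= STEP * r * probs[user][ch]

print([[round(p, 2) for p in user_probs] for user_probs in probs])
```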
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> The modern theory of evolutionary dynamics is founded upon the remarkable insights of R. A. Fisher and Sewall Wright and set forth in the loci classici The Genetical Theory of Natural Selection (1930) and ‘Evolution in Mendelian Populations’ (1931). By the time of the publication of Wright’s paper in 1931 all of the theory of population genetics, as it is presently understood, was established. It is a sign of the extraordinary power of these early formulations, that nothing of equal significance has been added to the theory of population genetics in the thirty years that have passed since that time. Yet we cannot take this period to mean that we now have an adequate theory of evolutionary dynamics. On the contrary, the theory of population genetics, as complete as it may be in itself, fails to deal with many problems of primary importance for an understanding of evolution. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> We present a view of cooperative control using the language of learning in games. We review the game-theoretic concepts of potential and weakly acyclic games, and demonstrate how several cooperative control problems, such as consensus and dynamic sensor coverage, can be formulated in these settings. Motivated by this connection, we build upon game-theoretic concepts to better accommodate a broader class of cooperative control problems. In particular, we extend existing learning algorithms to accommodate restricted action sets caused by the limitations of agent capabilities and group based decision making. Furthermore, we also introduce a new class of games called sometimes weakly acyclic games for time-varying objective functions and action sets, and provide distributed algorithms for convergence to an equilibrium. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> Competitive spectrum access is studied for cognitive radio networks. Based on the assumption of rational secondary users, the spectrum access is modeled as a graphical game, in which the payoff of a secondary user is dependent on only other secondary users that can cause significant interference. The Nash equilibrium in the graphical game is computed by minimizing the sum of regrets. To alleviate the local knowledge of payoffs (each secondary user knows only its own payoff for different channels), a subgradient based iterative algorithm is applied by exchanging information across different secondary users. When information exchange is not available, learning for spectrum access is carried out by employing stochastic approximation (more specifically, the Kiefer-Wolfowitz algorithm). The convergence of both situations is demonstrated by numerical simulations. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> We consider a noncooperative interaction among a large population of mobiles that interfere with each other through many local interactions. The first objective of this paper is to extend the evolutionary game framework to allow an arbitrary number of mobiles that are involved in a local interaction. We allow for interactions between mobiles that are not necessarily reciprocal. 
We study 1) multiple-access control in a slotted Aloha-based wireless network and 2) power control in wideband code-division multiple-access wireless networks. We define and characterize the equilibrium (called evolutionarily stable strategy) for these games and study the influence of wireless channels and pricing on the evolution of dynamics and the equilibrium. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> Secondary users sharing primary users' spectrum is modeled as a graphical game. Users located in random graphs and a regular lattice are considered. Secondary users are assumed to differentiate the ``quality" of the primary spectrum while interacting within their local neighborhood to minimize interference and congestion. The learning algorithm is also shown to be effective in punishing malicious users that violate spectrum etiquettes. An equivalence between spectrum sharing neighborhood interaction and the spin-glass model in statistical physics is established. A distributed exponential learning algorithm is used to arrive at an evolutionary stable solution to the game. Some theoretical properties of the system are studied and simulation results are presented to illustrate price of anarchy, convergence of the learning algorithm and asymptotic invariance of the system performance with respect to spectrum quality. <s> BIB005 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> This letter investigates the problem of distributed channel selection in cognitive radio ad hoc networks (CRAHNs) with heterogeneous spectrum opportunities. Firstly, we formulate this problem as a local congestion game, which is proved to be an exact potential game. Then, we propose a spatial best response dynamic (SBRD) to rapidly achieve Nash equilibrium via local information exchange. Moreover, the potential function of the game reflects the network collision level and can be used to achieve higher throughput. <s> BIB006 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB007 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> In this paper, we introduce and analyze the properties of a class of games, the atomic congestion games on graphs (ACGGs), which is a generalization of the classical congestion games. 
In particular, an ACGG captures the spatial information that is often ignored in a classical congestion game. This is useful in many networking problems, e.g., wireless networks where interference among the users heavily depends on the spatial information. In an ACGG, a player's payoff for using a resource is a function of the number of players who interact with it and use the same resource. Such spatial information can be captured by a graph. We study fundamental properties of the ACGGs: under what conditions these games possess a pure strategy Nash equilibrium (PNE), or the finite improvement property (FIP), which is sufficient for the existence of a PNE. We show that a PNE may not exist in general, but that it does exist in many important special cases including tree, loop, or regular bipartite networks. The FIP holds for important special cases including systems with two resources or identical payoff functions for each resource. Finally, we present two wireless network applications of ACGGs: power control and channel contention under IEEE 802.11. <s> BIB008 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> b: Graphical game <s> Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes are used to make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool to develop adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs. <s> BIB009
The most efficient and effective game model for a large-scale distributed radio network is the graphical game, also called the local interaction game BIB003 or spatial game BIB005 . In this game, a player's payoff is affected only by its neighbouring players rather than by all players of the game: a player's transmission affects the neighbours within its transmission range and does not affect distant players. This results in spatial reuse in the cognitive radio system, and the model can therefore be applied successfully to CIoT. In BIB001 , Smith proposed a regret-minimization algorithm for a free-use OSA system that converges to the NE. Xu et al. BIB006 proposed a share-use OSA system that uses an intelligent learning algorithm converging to an evolutionarily stable strategy (ESS). In BIB007 , Xu et al. designed a share-use OSA system that minimizes the collision level and maximizes the throughput of the system; the graphical games formulated in BIB007 and BIB009 are potential games. Marden et al. BIB002 showed that the best-response behaviour update rule yields an average-optimal solution, whereas in BIB007 Xu et al. formulated spatial adaptive play BIB008 , which is asymptotically optimal with local information exchange. Moreover, the literature shows that graphical games can be formulated as spatial congestion games for opportunistic spectrum access systems, in which a player's payoff is a function of the number of players that interfere with it and use the same network resources BIB004 . The existing literature mainly investigates the conditions under which a pure-strategy NE can be achieved in spatial congestion games.
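The spatial best-response idea behind local congestion games can be sketched on a toy conflict graph as follows; the topology, channel set, and utility (number of interfering neighbours on the same channel) are illustrative assumptions, not the exact formulations of the cited works.

```python
import random

# Conflict graph: node -> neighbours that interfere with it (hypothetical topology).
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
channels = [0, 1]
choice = {node: random.choice(channels) for node in neighbours}

def congestion(node, ch, current):
    """Local congestion utility: number of interfering neighbours on the same channel."""
    return sum(1 for nb in neighbours[node] if current[nb] == ch)

# Spatial best response: each node, in turn, switches to the channel with the
# fewest competing neighbours; in this potential game the dynamics converge to a pure NE.
changed = True
while changed:
    changed = False
    for node in neighbours:
        best = min(channels, key=lambda ch: congestion(node, ch, choice))
        if congestion(node, best, choice) < congestion(node, choice[node], choice):
            choice[node] = best
            changed = True

print(choice)   # a channel assignment in which no node can reduce its local congestion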
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> c: Evolutionary game <s> This letter investigates the problem of distributed channel selection in cognitive radio ad hoc networks (CRAHNs) with heterogeneous spectrum opportunities. Firstly, we formulate this problem as a local congestion game, which is proved to be an exact potential game. Then, we propose a spatial best response dynamic (SBRD) to rapidly achieve Nash equilibrium via local information exchange. Moreover, the potential function of the game reflects the network collision level and can be used to achieve higher throughput. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> c: Evolutionary game <s> In this paper, we design distributed spectrum access mechanisms with both complete and incomplete network information. We propose an evolutionary spectrum access mechanism with complete network information, and show that the mechanism achieves an equilibrium that is globally evolutionarily stable. With incomplete network information, we propose a distributed learning mechanism, where each user utilizes local observations to estimate the expected throughput and learns to adjust its spectrum access strategy adaptively over time. We show that the learning mechanism converges to the same evolutionary equilibrium on the time average. Numerical results show that the proposed mechanisms achieve up to 35 percent performance improvement over the distributed reinforcement learning mechanism in the literature, and are robust to the perturbations of users' channel selections. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> c: Evolutionary game <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB003
|
This theory was initially introduced by biologists to model population dynamics BIB001 . The evolutionary game was then formulated on this basis, leading to the concept of the evolutionarily stable strategy (ESS) BIB001 . In BIB001 , Xu et al. present the ESS as a means of capturing robustness and define the utility function over statistics. Here the ESS is characterized by robustness against invaders: once the ESS is reached, the population strategy cannot be overturned by a small group of deviating players, so the technique is robust to perturbations by a small number of players. This game has been applied successfully to wireless systems and can be applied to CIoT for the multiple access of objects, cooperative spectrum sensing, and network selection BIB002 . In , Monderer and Shapley propose a channel selection algorithm for a shared OSA system formulated as an evolutionary game. The authors showed that, with complete network information and channel availability following a Bernoulli distribution in each slot, the system converges to an ESS when replicator dynamics are applied. Basically, this work provides an adequate solution for the interactions among the players and the dynamics of channel selection in an OSA system. A robust game is presented in BIB003 , in which the authors explain in detail the dynamic and random deployment of cognitive small cells (CSCs); in these cells the utility function is defined by the expected capacity over all possible sets of active cells. The authors showed by simulation in a dynamic environment that distributed learning automata algorithms can be applied successfully to these potential games and converge to the NE.
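The replicator dynamics mentioned above can be illustrated with a small discrete-time simulation of channel selection by a large population of objects. The channel capacities and the congestion-based payoff are hypothetical; the update rule is the standard replicator equation, in which the share of users on a channel grows exactly when its payoff exceeds the population average.

```python
# Population shares x[c] of users on each channel; payoff falls with congestion.
bandwidth = [10.0, 6.0, 4.0]                 # hypothetical channel capacities
x = [1/3, 1/3, 1/3]                          # initial population state
step = 0.05

def payoff(c, x):
    # Expected throughput of a user on channel c: capacity shared by its users.
    return bandwidth[c] / max(x[c], 1e-9)

for t in range(2000):
    f = [payoff(c, x) for c in range(3)]
    avg = sum(x[c] * f[c] for c in range(3))      # population-average payoff
    # Discrete-time replicator update: x_c grows iff its payoff exceeds the average.
    x = [x[c] + step * x[c] * (f[c] - avg) for c in range(3)]
    total = sum(x)
    x = [xc / total for xc in x]                  # renormalise against numerical drift

print([round(xc, 3) for xc in x])   # shares proportional to capacity at the rest point
```

With this payoff model the shares converge to the state in which all channels yield equal per-user payoff, i.e., shares proportional to capacity, which is the ESS of this toy game.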
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> d: Coalition game <s> Collaborative spectrum sensing (CSS) between secondary users (SUs) in cognitive networks exhibits an inherent tradeoff between minimizing the probability of missing the detection of the primary user (PU) and maintaining a reasonable false alarm probability (e.g., for maintaining good spectrum utilization). In this paper, we study the impact of this tradeoff on the network structure and the cooperative incentives of the SUs that seek to cooperate to improve their detection performance. We model the CSS problem as a nontransferable coalitional game, and we propose distributed algorithms for coalition formation (CF). First, we construct a distributed CF algorithm that allows the SUs to self-organize into disjoint coalitions while accounting for the CSS tradeoff. Then, the CF algorithm is complemented with a coalitional voting game to enable distributed CF with detection probability (CF-PD) guarantees when required by the PU. The CF-PD algorithm allows the SUs to form minimal winning coalitions (MWCs), i.e., coalitions that achieve the target detection probability with minimal costs. For both algorithms, we study and prove various properties pertaining to network structure, adaptation to mobility, and stability. Simulation results show that CF reduces the average probability of miss per SU up to 88.45%, relative to the noncooperative case, while maintaining a desired false alarm. For CF-PD, the results show that up to 87.25% of the SUs achieve the required detection probability through MWCs. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> d: Coalition game <s> This paper investigates price-based resource allocation strategies for two-tier femtocell networks, in which a central macrocell is underlaid with distributed femtocells, all operating over the same frequency band. Assuming that the macrocell base station (MBS) protects itself by pricing the interference from femtocell users, a Stackelberg game is formulated to study the joint utility maximization of the macrocell and femtocells subject to a maximum tolerable interference power constraint at the MBS. Two practical femtocell network models are investigated: sparsely deployed scenario for rural areas and densely deployed scenario for urban areas. For each scenario, two pricing schemes: uniform pricing and non-uniform pricing, are proposed. The Stackelberg equilibriums for the proposed games are characterized, and an effective distributed interference price bargaining algorithm with guaranteed convergence is proposed for the uniform-pricing case. Numerical examples are presented to verify the proposed studies. It is shown that the proposed schemes are effective in resource allocation and macrocell protection for both the uplink and downlink transmissions in spectrum-sharing femtocell networks. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> d: Coalition game <s> We investigate the problem of distributed channel selection using a game-theoretic stochastic learning solution in an opportunistic spectrum access (OSA) system where the channel availability statistics and the number of the secondary users are apriori unknown. We formulate the channel selection problem as a game which is proved to be an exact potential game. 
However, due to the lack of information about other users and the restriction that the spectrum is time-varying with unknown availability statistics, the task of achieving Nash equilibrium (NE) points of the game is challenging. Firstly, we propose a genie-aided algorithm to achieve the NE points under the assumption of perfect environment knowledge. Based on this, we investigate the achievable performance of the game in terms of system throughput and fairness. Then, we propose a stochastic learning automata (SLA) based channel selection algorithm, with which the secondary users learn from their individual action-reward history and adjust their behaviors towards a NE point. The proposed learning algorithm neither requires information exchange, nor needs prior information about the channel availability statistics and the number of secondary users. Simulation results show that the SLA based learning algorithm achieves high system throughput with good fairness. <s> BIB003
The coalition game is designed to form groups or clusters of players in order to achieve better coordination and increased payoffs in a distributed system. This game is also useful in large-scale CIoT, as it coordinates the distributed decision makers and objects. In BIB003 , Xu et al. note that, just as different countries can form coalitions to improve their collective potential, players can form coalitions to improve their sensing performance. The problem of spectrum sensing and access in opportunistic spectrum access for a partitioned network is discussed in BIB001 . From the literature, the benefits of the coalition game can be summarized as follows: (i) it improves performance and throughput, since coalition members sense different channels and share their information, reducing the sensing time; (ii) interference can be reduced when players jointly coordinate their channel access within a coalition; and (iii) channel capacity can be improved when coalition members distribute their total power over multiple channels. In short, system performance can be improved through the coordination and cooperation among players that the coalition game provides BIB002 .
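Benefit (i) above, namely that cooperative sensing within a coalition reduces missed detections at the price of more false alarms, can be illustrated with a few lines of Python using the common OR fusion rule; the per-object sensing probabilities and the utility weights are assumed values for illustration only.

```python
from functools import reduce

# Hypothetical per-object probabilities of missing the primary user when sensing alone.
p_miss = {"obj1": 0.30, "obj2": 0.25, "obj3": 0.40}
p_false_alarm = 0.05           # per-object false-alarm probability (assumed identical)

def coalition_miss(members):
    """OR-rule fusion: the coalition misses only if every member misses."""
    return reduce(lambda acc, m: acc * p_miss[m], members, 1.0)

def coalition_false_alarm(members):
    """OR-rule fusion: a false alarm occurs if any member raises one."""
    return 1.0 - (1.0 - p_false_alarm) ** len(members)

def coalition_value(members, miss_weight=1.0, fa_weight=1.0):
    """Coalition utility trades detection gain against the growing false-alarm cost."""
    return -(miss_weight * coalition_miss(members) + fa_weight * coalition_false_alarm(members))

for coalition in (["obj1"], ["obj1", "obj2"], ["obj1", "obj2", "obj3"]):
    print(coalition,
          round(coalition_miss(coalition), 3),
          round(coalition_false_alarm(coalition), 3),
          round(coalition_value(coalition), 3))
```

Running the loop shows the missed-detection probability shrinking multiplicatively as the coalition grows, while the false-alarm probability rises, which is exactly the trade-off that drives coalition formation in cooperative sensing.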
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> e: Hierarchical game <s> The femtocell concept is an emerging technology for deploying the next generation of the wireless networks, aiming at indoor coverage enhancement, increasing capacity, and offloading the overlay macrocell traffic. Nevertheless, the detrimental factor in such networks is co-channel interference between macrocells and femtocells, as well as among neighboring femtocells. This in turn can dramatically decrease the overall capacity of the network. In addition, due to their non-coordinated nature, femtocells need to self-organize in a distributed manner not to cause interference on the macrocell, while at the same time managing interference among neighboring femtocells. This paper proposes and analyzes a Reinforcement-Learning (RL) framework where a macrocell network is underlaid with femtocells sharing the same spectrum. A distributed Q-learning algorithm is proposed in which each Femto Base Station/Access Point (FBS/FAP) gradually learns (by interacting with its local environment) through trials and errors, and adapt the channel selection strategy until reaching convergence. The proposed Q-learning algorithm is cast into high level and low level subproblems, in which the former finds in a decentralized way the channel allocation through Q-learning, while the latter computes the optimal power allocation. Investigations show that through learning, femtocells are not only able to self-organize with only local information, but also mitigate their interference towards the macrocell network. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> e: Hierarchical game <s> This paper investigates price-based resource allocation strategies for two-tier femtocell networks, in which a central macrocell is underlaid with distributed femtocells, all operating over the same frequency band. Assuming that the macrocell base station (MBS) protects itself by pricing the interference from femtocell users, a Stackelberg game is formulated to study the joint utility maximization of the macrocell and femtocells subject to a maximum tolerable interference power constraint at the MBS. Two practical femtocell network models are investigated: sparsely deployed scenario for rural areas and densely deployed scenario for urban areas. For each scenario, two pricing schemes: uniform pricing and non-uniform pricing, are proposed. The Stackelberg equilibriums for the proposed games are characterized, and an effective distributed interference price bargaining algorithm with guaranteed convergence is proposed for the uniform-pricing case. Numerical examples are presented to verify the proposed studies. It is shown that the proposed schemes are effective in resource allocation and macrocell protection for both the uplink and downlink transmissions in spectrum-sharing femtocell networks. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> e: Hierarchical game <s> We consider a cognitive radio system with one primary (licensed) user and multiple secondary (unlicensed) users. Given the interference temperature constraint, the secondary users compete for the available spectrum to fulfill their own communication need. Borrowing the concept of price from market theory, we develop a decentralized Stackelberg game formulation for power allocation. 
In this scheme, the primary user (leader) announces prices for the available tones such that a system utility is maximized. Using the announced prices, secondary users (followers) compete for the available bandwidth to maximize their own utilities. We show that this Stackelberg game is polynomial time solvable under certain channel conditions. When the individual power constraints of secondary users are inactive (due to strict interference temperature constraint), the proposed distributed power control method is decomposable across the tones and unlike normal water-filling it respects the interference temperature constraints of the primary user. When individual power constraints are active, we propose a distributed approach that solves the problem under an aggregate interference temperature constraint. Moreover, we propose a dual decomposition based power control method and show that it solves the Stackelberg game asymptotically when the number of tones becomes large. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> e: Hierarchical game <s> In communication systems where users share common resources, selfish behavior usually results in suboptimal resource utilization. There have been extensive works that model communication systems with selfish users as one-shot games and propose incentive schemes to achieve Pareto-optimal outcomes. However, in many communication systems, due to strong negative externalities among users, the sets of feasible payoffs in one-shot games are nonconvex. Thus, it is possible to expand the set of feasible payoffs by having users choose different action profiles in an alternating manner. In this paper, we formulate a model of repeated games with intervention. First, by using repeated games we can convexify the set of feasible payoffs in one-shot games. Second, by using intervention in repeated games we can achieve a larger set of equilibrium payoffs and loosen requirements for users' patience to achieve a target payoff. We study the problem of maximizing a welfare function defined on users' payoffs. We characterize the limit set of equilibrium payoffs. Given the optimal equilibrium payoff, we derive the sufficient condition on the discount factor and the intervention capability to achieve it, and design corresponding equilibrium strategies. We illustrate our analytical results with power control and flow control. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> e: Hierarchical game <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. 
Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB005
All the game models discussed in the previous sections describe interactions between players of equal priority, with no hierarchy, owing to the distributed structure of the network. The CIoT system, however, is hierarchical in nature and requires hierarchical game-based algorithms for object interlinking and management. The most useful game model for a hierarchical network is the Stackelberg game BIB002 . In this model, the players are a leader and several followers who compete for a certain resource: the leader takes an action first, and the followers then choose their actions in response to the leader's action. Moreover, neither the leader nor the followers can gain by deviating from the Stackelberg equilibrium BIB003 . In some formulations both the leader and the followers maximize their own utility functions, while in others the leader (the resource owner) has no utility of its own and aims to maximize the accumulated utility of the followers. The results of this research clearly show that the efficiency of the NE can be improved significantly by using Stackelberg equilibria. In BIB004 , Xiao et al. proposed an intervention rule for the hierarchical game model: the leader selects an intervention rule, and the followers then choose their actions according to the selected rule. This hierarchical game model is called the intervention game, in which the leader regulates the resources shared among the followers. In BIB005 , Xu et al. use the hierarchical game to propose a cluster-based hierarchical structure for self-organization and optimization in large-scale networks, and present a rule for calculating the computational complexity of the cluster-based hierarchical game based on a Q-learning approach BIB001 . The results show that the cluster-based hierarchical approach is most suitable for dense networks. What is now needed is a distributed channel selection scheme for the CIoT system based on hierarchical games, providing better resource allocation and joint power control.
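The leader–follower structure of the Stackelberg game can be illustrated with a toy interference-pricing example solved by backward induction: the follower's best-response power is derived first for any announced price, and the leader then optimizes its price over a grid while anticipating that reaction. The utility forms, channel gain, and power limit are hypothetical and do not reproduce the exact models of the cited works.

```python
import math

g, P_MAX = 2.0, 5.0           # hypothetical channel gain and power limit of the follower

def follower_best_response(price):
    """Follower maximises log(1 + g*p) - price*p over p in [0, P_MAX] (closed form)."""
    if price <= 0:
        return P_MAX
    p = 1.0 / price - 1.0 / g                 # stationary point of the concave utility
    return max(0.0, min(P_MAX, p))

def leader_revenue(price):
    """Leader anticipates the follower's reaction and collects price * power."""
    return price * follower_best_response(price)

# Backward induction: search the leader's price over a grid, knowing the follower's reaction.
prices = [0.01 * i for i in range(1, 501)]
best_price = max(prices, key=leader_revenue)
best_power = follower_best_response(best_price)

print(round(best_price, 2), round(best_power, 2), round(leader_revenue(best_price), 3))
print(round(math.log(1 + g * best_power) - best_price * best_power, 3))  # follower utility
```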
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. MARKOVIAN DECISION PROCESS <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. MARKOVIAN DECISION PROCESS <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB002
The MDP provides an excellent framework for modelling decision making in CIoT for several objects over multiple periods. Basically, it models the probability that the system moves from one state to another under a given action, and the optimal policy is computed by maximizing the expected discounted reward. The Markov property means that the next state depends only on the current state and action, not on the earlier history of the players. A Markov decision process describes an environment for reinforcement learning in which the environment is fully observable, i.e., the current state completely characterizes the process. The multi-period sequential decision-making problems in CIoT can be solved well by Markov decision process (MDP) models, and the most attractive feature of this technique is that spectrum sensing and channel selection can be formulated directly as an MDP problem. This section gives a brief overview of three basic types of model, namely the discrete-time MDP (DTMDP), the partially observable MDP (POMDP), and the constrained MDP BIB001 , BIB002 , that enable intelligent decision making in CIoT.
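To illustrate the difference between a fully observed DTMDP and a POMDP before turning to the DTMDP in detail, the sketch below performs the Bayesian belief update that a secondary object could use when the channel state (idle/busy) is hidden and only observed through an imperfect sensor; the detection, false-alarm, and transition probabilities are assumed values.

```python
# Belief b = Pr(channel is idle). Sensing is imperfect, so the true state is hidden.
P_DETECT = 0.9        # Pr(sensor reports "busy" | channel busy)   (assumed)
P_FALSE_ALARM = 0.1   # Pr(sensor reports "busy" | channel idle)   (assumed)
P_STAY_IDLE = 0.8     # Pr(idle -> idle) between slots             (assumed)
P_BECOME_IDLE = 0.3   # Pr(busy -> idle) between slots             (assumed)

def predict(b):
    """Markov prediction step: propagate the belief one slot forward."""
    return b * P_STAY_IDLE + (1.0 - b) * P_BECOME_IDLE

def update(b, observed_busy):
    """Bayes correction step after a (noisy) sensing observation."""
    if observed_busy:
        likelihood_idle, likelihood_busy = P_FALSE_ALARM, P_DETECT
    else:
        likelihood_idle, likelihood_busy = 1.0 - P_FALSE_ALARM, 1.0 - P_DETECT
    numerator = likelihood_idle * b
    return numerator / (numerator + likelihood_busy * (1.0 - b))

b = 0.5
for obs in [False, False, True]:      # two "idle" readings, then one "busy" reading
    b = update(predict(b), obs)
    print(round(b, 3))
```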
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) DISCRETE TIME MARKOVIAN DECISION PROCESS (DTMP) <s> This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) DISCRETE TIME MARKOVIAN DECISION PROCESS (DTMP) <s> Dynamic spectrum access has been a subject of extensive research activity in recent years. The increasing volume of literature calls for a deeper understanding of the characteristics of current spectrum utilization. In this paper we present a detailed spectrum measurement study, with data collected in the 20MHz to 3GHz spectrum band and at four locations concurrently in South China. We examine the first and second order statistics of the collected data, including channel occupancy/vacancy statistics, channel utilization within each individual wireless service, and the temporal, spectral, and spatial correlation of these measures. Main findings include that the channel vacancy durations follow an exponential-like distribution, but are not independently distributed over time, and that significant spectral and spatial correlations are found between channels of the same service. We then exploit such spectrum correlation to develop a 2-dimensional frequent pattern mining algorithm that can accurately predict channel availability based on past observations. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) DISCRETE TIME MARKOVIAN DECISION PROCESS (DTMP) <s> Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes are used to make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool to develop adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs. <s> BIB003
|
In the DTMDP model, the process is observed periodically and classified into one of the possible states; an action is then chosen from the set of possible actions, and as a result the process moves to a state to which the user has to switch. The basic elements of the discrete time MDP model are:
• The decision epochs k = 0, 1, 2, . . .
• A set of states s ∈ S.
• A set of actions a ∈ A.
• A reward function R : S × A → ℝ, where R(s, a) is the reward received when a player performs action a in state s.
In this model, when a player performs action a in state s, the system transits from state s to s' in the next period according to the stochastic transition probability Pr(s'|s, a). The game player obtains the maximum discounted reward for state s by mapping states to actions through an optimal policy π(s); the value of a policy is given by V^π(s) = E[∑_{k=0}^{∞} γ^k R(s_k, π(s_k)) | s_0 = s], where E[·] represents the expectation operator and γ ∈ [0, 1] is the discount factor. Moreover, the maximum discounted future reward for a state-action pair (s, a) is given by the Q-value function Q(s, a) = R(s, a) + γ ∑_{s'} Pr(s'|s, a) max_{a'} Q(s', a'). This method provides an optimal policy that is stationary and deterministic. Here, stationary means there is a fixed optimal action, denoted π*(s), for every state s, while deterministic means a single action per state, π*(s) = argmax_a Q*(s, a), where value iteration (or, in the model-free setting, the Q-learning technique) is used to calculate the optimal Q-value Q*(s, a) for each state-action pair (s, a) BIB003 , BIB001 . Basically, in the DTMDP model each player observes the system state in every decision period.
For instance, in our practical real-world example, the intelligent decision maker periodically observes the health status (state s) of Bob and takes an action a when he is having a heart attack. In short, the decision maker facilitates the game players in changing their state s from normal to the new state s' of heart attack on receiving the latest readings from the sensors. The utilization of DTMDP algorithms by the decision maker therefore enables Bob's transition from the normal state s to the heart-attack state s' in the next period to be captured through the stochastic transition probability Pr(s'|s, a). The biggest benefit of implementing DTMDP models in the decision-making layer is that the system state is completely observed by the players in each decision period. The decision maker also facilitates the game player in calculating the maximum discounted reward for state s by mapping states to actions for the optimal policy π(s), as discussed previously, and the Q-learning technique is used to calculate the optimal Q-value Q*(s, a) for each state-action pair (s, a) BIB003 , BIB001 . The decision maker thus helps each player take an appropriate action according to that player's health status. With the decision-making layer added to the framework and the DTMDP algorithm in place, when Bob's health status changes from normal to heart attack, the system selects the related CVO and a channel for communication, using the Q-learning technique, to alert the medical assistance system to send help to the user. Moreover, the utilization of the DTMDP algorithm also allows the decision maker to inform the doctor about his patient's health status and to alert the hospital staff in case of an emergency by selecting the appropriate CVO from its repository and an appropriate channel for communication.
Finally, other life-saving support systems, such as the ambulance service and paramedic staff, are also informed about the patient's current health status by the intelligent decision maker through CVO and appropriate channel selection. In short, this intelligent CIoT system is connected to all the emergency service provider systems so that the life of any patient in an emergency can be saved. The previous literature has shown that the correlation between different objects can be captured using an availability vector, which identifies the state information of each object; this model is also used to predict channel availability as well as user activity. Chen et al. BIB002 formulate an MDP model that uses the availability vector to characterize the activities of the PUs. The results showed that the usage collision probability and the channel availability vector enable the SUs to sense multiple channels sequentially within a slot and also reduce the sensing overhead.
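To make the DTMDP machinery above concrete, the following is a minimal Python sketch that runs value iteration on a toy two-state health-monitoring MDP loosely modelled on the Bob example; the states, actions, rewards and transition probabilities are illustrative assumptions, not values taken from this survey or the cited papers.

```python
import numpy as np

# Toy two-state DTMDP (all numbers are illustrative assumptions).
# States: 0 = "normal", 1 = "heart attack".  Actions: 0 = "monitor", 1 = "alert".
P = np.array([            # P[a, s, s'] = Pr(s' | s, a)
    [[0.95, 0.05],        # monitor, normal: small chance an emergency starts
     [0.00, 1.00]],       # monitor, emergency: monitoring alone does not resolve it
    [[0.95, 0.05],        # alert, normal: harmless but unnecessary
     [0.70, 0.30]],       # alert, emergency: help likely restores the normal state
])
R = np.array([            # R[s, a]
    [0.0, -1.0],          # normal: alerting carries a small cost
    [-10.0, 5.0],         # emergency: monitoring is penalised, alerting is rewarded
])
gamma = 0.9

# Value iteration on Q(s,a) = R(s,a) + gamma * sum_s' Pr(s'|s,a) * max_a' Q(s',a')
Q = np.zeros((2, 2))
for _ in range(500):
    V = Q.max(axis=1)                                # V(s') = max_a' Q(s', a')
    Q = R + gamma * np.einsum('ast,t->sa', P, V)

policy = Q.argmax(axis=1)                            # pi*(s) = argmax_a Q*(s, a)
print("optimal Q-values:\n", np.round(Q, 2))
print("optimal action per state (0=monitor, 1=alert):", policy)
```

Under these assumed numbers the computed policy monitors in the normal state and alerts in the emergency state, which matches the behaviour attributed to the decision maker in the example above.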
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> Coordination of agent activities is a key problem in multiagent systems. Set in a larger decision theoretic context, the existence of coordination problems leads to difficulty in evaluating the utility of a situation. This in turn makes defining optimal policies for sequential decision processes problematic. We propose a method for solving sequential multi-agent decision problems by allowing agents to reason explicitly about specific coordination mechanisms. We define an extension of value iteration in which the system's state space is augmented with the state of the coordination mechanism adopted, allowing agents to reason about the short and long term prospects for coordination, the long term consequences of (mis)coordination, and make decisions to engage or avoid coordination problems based on expected value. We also illustrate the benefits of mechanism generalization. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> We propose decentralized cognitive MAC protocols that allow secondary users to independently search for spectrum opportunities without a central coordinator or a dedicated communication channel. Recognizing hardware and energy constraints, we assume that a secondary user may not be able to perform full-spectrum sensing or may not be willing to monitor the spectrum when it has no data to transmit. We develop an analytical framework for opportunistic spectrum access based on the theory of partially observable Markov decision process (POMDP). This decision-theoretic approach integrates the design of spectrum access protocols at the MAC layer with spectrum sensing at the physical layer and traffic statistics determined by the application layer of the primary network. It also allows easy incorporation of spectrum sensing error and constraint on the probability of colliding with the primary users. Under this POMDP framework, we propose cognitive MAC protocols that optimize the performance of secondary users while limiting the interference perceived by primary users. A suboptimal strategy with reduced complexity yet comparable performance is developed. Without additional control message exchange between the secondary transmitter and receiver, the proposed decentralized protocols ensure synchronous hopping in the spectrum between the transmitter and the receiver in the presence of collisions and spectrum sensing errors <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> Opportunistic spectrum access (OSA) that allows secondary users to independently search for and exploit instantaneous spectrum availability is considered. The design objective is to maximize the throughput of a secondary user while limiting the probability of colliding with primary users. Integrated in the joint design are three basic components: a spectrum sensor that identifies spectrum opportunities, a sensing strategy that determines which channels in the spectrum to sense, and an access strategy that decides whether to access based on potentially erroneous sensing outcomes. This joint design is formulated as a constrained partially observable Markov decision process (POMDP), and a separation principle is established. 
The separation principle reveals the optimality of myopic policies for the design of the spectrum sensor and the access strategy, leading to closed-form optimal solutions. Furthermore, it decouples the design of the sensing strategy from that of the spectrum sensor and the access strategy, and reduces the constrained POMDP to an unconstrained one. Numerical examples are provided to study the tradeoff between sensing time and transmission time, the interaction between the physical layer spectrum sensor and the MAC layer sensing and access strategies, and the robustness of the ensuing design to model mismatch. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> This paper considers a scenario in which a secondary user (SU) opportunistically accesses a channel allocated to some primary network (PN) that switches between idle and active states in a time-slotted manner. At the beginning of each time slot, SU can choose to stay idle or to carry out spectrum sensing to detect the state of PN. If PN is detected to be idle, SU can carry out data transmission. Spectrum sensing consumes time and energy and introduces false alarms and mis-detections. The objective is to dynamically decide, for each time slot, whether SU should stay idle or carry out sensing, and if so, for how long, to maximize the expected reward. We formulate this as a partially observable Markov decision process and prove important properties of the optimal control policies. Heuristic control policies with low complexity and good performance are also proposed. Numerical results show the significant performance gain of our dynamic control approach for opportunistic spectrum access. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> The focus of this paper is on solving multi-robot planning problems in continuous spaces with partial observability. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems, but representing and solving Dec-POMDPs is often intractable for large problems. To allow for a high-level representation that is natural for multi-robot problems and scalable to large discrete and continuous problems, this paper extends the Dec-POMDP model to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP). The Dec-POSMDP formulation allows asynchronous decision-making by the robots, which is crucial in multi-robot domains. We also present an algorithm for solving this Dec-POSMDP which is much more scalable than previous methods since it can incorporate closed-loop belief space macro-actions in planning. These macro-actions are automatically constructed to produce robust solutions. The proposed method's performance is evaluated on a complex multi-robot package delivery problem under uncertainty, showing that our approach can naturally represent multi-robot problems and provide high-quality solutions for large-scale problems. <s> BIB005 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) PARTIALLY OBSERVABLE MDP (POMDP) <s> Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. 
Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. <s> BIB006
|
In a POMDP, the agents only partially observe the state of the system, while the dynamics of the system are still governed by a Markovian decision process. Basically, the agents rely on a probability distribution over the set of possible states, based on a set of observations and the probabilities of the Markovian process: they are uncertain about the current state, and each state only emits an observation. In the POMDP model the system state is thus partially perceived by each player in every decision period, and such partial observations and unknown states cannot be handled by a standard MDP. In this technique, the main goal of an agent is to choose, at each time step, the actions that maximize its expected future discounted reward E[∑_{t=0}^{∞} γ^t r_t]. As discussed in the previous section, the agent is only concerned with the immediate expected reward and acts on that basis when γ = 0, whereas it tries to maximize the expected sum of future rewards when γ = 1. One solution to this problem is for the user to interact with the environment and collect observations; the agent then updates its belief about the true state by updating the probability distribution over its current state. This can be achieved by maintaining a belief vector that contains the conditional probability of being in each system state, updated after each decision BIB006 . Hence, the optimal policy can be calculated by the user based on the maintained belief vector BIB006 , BIB001 . In this model, every agent has to wait in some state for a decision update from the other agents, and then all agents choose their individual actions simultaneously according to that decision. Moreover, the agents keep a record of each state and decision through an information exchange mechanism. An ideal solution for distributed decision making by multiple cooperative agents is provided by the Decentralized Partially Observable MDP (DEC-POMDP) . This intelligent model allows each agent to observe only a small part of the system state, or equivalently to access only local information. However, computing the joint optimal policy, even approximately, is intractable because each agent decides independently and does not know the actions and states of the other agents BIB005 . Finally, this model lets each agent observe its local state instead of all the local and global states, which particularly suits CIoT requirements in application areas like smart cities, which consist of various heterogeneous objects and dynamic services. In BIB002 , Zhao et al. have presented a framework and algorithm that handle both perfect and imperfect spectrum sensing; the simulation results proved that throughput is improved significantly using this algorithm. In BIB003 , Chen et al. presented an intelligent strategy that formulates joint sensing and access in an OSA system as a constrained POMDP; they establish a separation principle which shows that designing the sensing and access strategies separately still leads to the optimal solution. In , Unnikrishnan and Veeravalli have used a greedy algorithm for channel selection and derived an optimal policy based on the channel availability statistics; this algorithm enables the user to estimate the statistics in real time while ensuring that the collision constraint is met. In BIB004 , Hoang et al. have formulated the joint spectrum sensing and access control problem.
The results showed that the total reward of a slot can be computed from the sensing duration and the probability that the channel is idle; this reward then reflects the false-alarm and mis-detection probabilities and the energy consumption in that particular slot. Basically, the authors determine the sensing duration in each time slot so as to maximize the net reward.
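As an illustration of the belief-maintenance step that underpins POMDP-based sensing, the sketch below performs the Bayesian belief update for a single two-state (idle/busy) channel; the transition matrix and the false-alarm and mis-detection probabilities are assumed for illustration and are not taken from the cited works.

```python
import numpy as np

# Belief update for a two-state channel POMDP (illustrative parameters).
# States: 0 = "idle", 1 = "busy".
T = np.array([[0.8, 0.2],     # T[s, s'] = Pr(s' | s) for the primary-user channel
              [0.3, 0.7]])
# O[s', o] = Pr(observation o | true state s'), o: 0 = "sensed idle", 1 = "sensed busy"
O = np.array([[0.9, 0.1],     # 10% false-alarm probability when the channel is idle
              [0.2, 0.8]])    # 20% mis-detection probability when the channel is busy

def update_belief(b, obs):
    """One Bayesian filter step: predict with T, then correct with the observation."""
    predicted = b @ T                      # prior over the next state
    unnormalised = predicted * O[:, obs]
    return unnormalised / unnormalised.sum()

b = np.array([0.5, 0.5])                   # start with no knowledge of the channel
for obs in [0, 0, 1, 0]:                   # a short sequence of sensing outcomes
    b = update_belief(b, obs)
    print("belief (idle, busy) =", np.round(b, 3))
```

The resulting belief vector is exactly the quantity on which the optimal POMDP policy described above is computed.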
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 3) CONSTRAINED MDP (CMDP) <s> The problem of opportunistic access of parallel channels occupied by primary users is considered. Under a continuous-time Markov chain modeling of the channel occupancy by the primary users, a slotted transmission protocol for secondary users using a periodic sensing strategy with optimal dynamic access is proposed. To maximize channel utilization while limiting interference to primary users, a framework of constrained Markov decision processes is presented, and the optimal access policy is derived via a linear program. Simulations are used for performance evaluation. It is demonstrated that periodic sensing yields negligible loss of throughput when the constraint on interference is tight. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 3) CONSTRAINED MDP (CMDP) <s> The problem of cognitive access of multiple primary channels by multiple cognitive users is considered. The primary transmission on each channel is modeled by a continuous time Markov on-off process. Cognitive access of the primary channels is realized via channel sensing. Each cognitive user adopts a slotted transmission structure, senses one channel in each slot and makes the transmission decision based on the sensing outcome. The cognitive transmissions in each channel are subject to collision constraints that limit their interference to the primary users. The maximum throughput region of this multiuser cognitive network is characterized by establishing inner and outer bounds. Under tight collision constraints, the inner bound is obtained by a simple orthogonalized periodic sensing with memoryless access policy and its generalizations. The outer bound, on the other hand, is obtained by relating the sum throughput with the interference limits. It is shown that when collision constraints are tight, the outer and inner bounds match. This maximum throughput region result is further extended by a generalized periodic sensing scheme with a mechanism of timing sharing. Under general collision constraints, another outer bound is obtained via Whittle's relaxation and another inner bound obtained via Whittle's index sensing policy with memoryless access. Packet level simulations are used to validate the analytical performance prediction. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 3) CONSTRAINED MDP (CMDP) <s> We consider the problem of optimal channel access to provide quality of service (QoS) for data transmission in cognitive vehicular networks. In such a network, the vehicular nodes can opportunistically access the radio channels (referred to as shared-use channels) which are allocated to licensed users. Also, they are able to reserve a channel for dedicated access (referred to as exclusive-use channel) for data transmission. A channel access management framework is developed for cluster-based communication among vehicular nodes. This framework has three components: opportunistic access to shared-use channels, reservation of exclusive-use channel, and cluster size control. A hierarchical optimization model is then developed for this framework to obtain the optimal policy. 
The objective of the optimization model is to maximize the utility of the vehicular nodes in a cluster and to minimize the cost of reserving exclusive-use channel while the QoS requirements of data transmission (for vehicle-to-vehicle and vehicle-to-roadside communications) are met, and also the constraint on probability of collision with licensed users is satisfied. This hierarchical optimization model comprises of two constrained Markov decision process (CMDP) formulations - one for opportunistic channel access, and the other for joint exclusive-use channel reservation and cluster size control. An algorithm is presented to solve this hierarchical optimization model. Performance evaluation results show the effectiveness of the optimal channel access management policy. The proposed optimal channel access management framework will be useful to support mobile computing and intelligent transportation system (ITS) applications in vehicular networks. <s> BIB003
|
This intelligent model handles sequential decision problems in which explicit constraints are imposed, by analyzing and modelling the constraints directly rather than ignoring them. In practical scenarios such constraints are always present, for example limits on the interference caused to other users. Such constrained sequential decision-making problems are well managed by the CMDP model. The CMDP model is quite close to the MDP, the difference being that the policy must additionally satisfy cost constraints, which adds computational cost to calculating the policies. The cost of a policy p is expressed through a vector of cost functions D(u) of dimension N_c with constant constraint values. Using the discounted cost criterion, the constrained MDP is equivalent to a linear program in which the cost vectors C and D_n have dimension |K|, and the optimization variable is an occupation-measure vector ρ, where ρ(s, a) ∈ Q gives the probability that action a is selected in state s; the entries of ρ(s, a) sum to 1 for all ρ ∈ Q. The stationary optimal policy p is then obtained from the optimal ρ. The previous literature showed that the CMDP can be used to maximize throughput while satisfying a collision-probability requirement BIB001 . Moreover, attractive heuristic algorithms such as memory-less access and greedy access can be utilized for optimal decision making in CIoT. The memory-less access algorithm BIB001 is optimal in scenarios where the collision constraints are tight, and the maximum throughput region can also be obtained using such algorithms BIB002 ; that study showed that, under tight collision constraints, the outer and inner bounds match. These studies modeled PU traffic but did not cater for the QoS of the SUs. In BIB003 , Niyato et al. have considered not only the PU traffic but also the SUs' QoS, constraining the maximum probability of collision, maximum packet loss and packet delay for vehicular ad-hoc nodes. Hence, this technique enables opportunistic spectrum access, object selection for channel reservation and cluster-size control through a hierarchical Markovian model.
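A common way to compute a CMDP policy, as described above, is to solve a linear program over the occupation measure ρ(s, a). The sketch below does this with scipy.optimize.linprog for a small randomly generated CMDP; the rewards, costs, discount factor and cost bound d_max are all illustrative assumptions, not values from the cited papers.

```python
import numpy as np
from scipy.optimize import linprog

# CMDP as a linear program over the occupation measure rho(s, a) (illustrative numbers).
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a, :] = Pr(. | s, a)
r = rng.uniform(0.0, 1.0, size=(S, A))         # reward, e.g. achieved throughput
d = rng.uniform(0.0, 1.0, size=(S, A))         # cost, e.g. collision probability
d_max = 0.5                                    # bound on the expected discounted cost
mu = np.full(S, 1.0 / S)                       # initial state distribution

# Bellman-flow constraints defining a valid (normalised) occupation measure:
#   sum_a rho(s',a) - gamma * sum_{s,a} Pr(s'|s,a) rho(s,a) = (1 - gamma) mu(s')
A_eq = np.zeros((S, S * A))
for s in range(S):
    for a in range(A):
        col = s * A + a
        A_eq[s, col] += 1.0
        A_eq[:, col] -= gamma * P[s, a, :]
b_eq = (1.0 - gamma) * mu

res = linprog(c=-r.ravel(),                    # maximise expected discounted reward
              A_ub=d.ravel()[None, :], b_ub=[d_max],
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, None)] * (S * A))
if res.success:
    rho = res.x.reshape(S, A)
    policy = rho / rho.sum(axis=1, keepdims=True)   # possibly randomised optimal policy
    print("optimal policy Pr(a|s):\n", np.round(policy, 3))
else:
    print("constraint too tight, no feasible policy:", res.message)
```

Note that, unlike the unconstrained MDP, the optimal CMDP policy recovered from ρ may be randomised, which is why the occupation-measure formulation is used.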
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) NO RECALL OSP (NR-OSP) <s> In this paper, we study the gains from opportunistic spectrum usage when neither sender or receiver are aware of the current channel conditions in different frequency bands. Hence to select the best band for sending data, nodes first need to measure the channel in different bands which takes time away from sending actual data. We analyze the gains from opportunistic band selection by deriving an optimal skipping rule, which balances the throughput gain from finding a good quality band with the overhead of measuring multiple bands. We show that opportunistic band skipping is most beneficial in low signal to noise scenarios, which are typically the cases when the node throughput in single-band (no opportunism) system is the minimum. To study the impact of opportunism on network throughput, we devise a CSMA/CA protocol, multi-band opportunistic auto rate (MOAR), which implements the proposed skipping rule on a per node pair basis. The proposed protocol exploits both time and frequency diversity, and is shown to result in typical throughput gains of 20% or more over a protocol which only exploits time diversity, opportunistic auto rate (OAR). <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) NO RECALL OSP (NR-OSP) <s> This paper investigates the optimal sensing order problem in multi-channel cognitive medium access control with opportunistic transmissions. The scenario in which the availability probability of each channel is known is considered first. In this case, when the potential channels are identical (except for the availability probabilities) and independent, it is shown that, although the intuitive sensing order (i.e., descending order of the channel availability probabilities) is optimal when adaptive modulation is not used, it does not lead to optimality in general with adaptive modulation. Thus, a dynamic programming approach to the search for an optimal sensing order with adaptive modulation is presented. For some special cases, it is proved that a simple optimal sensing order does exist. More complex scenarios are then considered, e.g., in which the availability probability of each channel is unknown. Optimal strategies are developed to address the challenges created by this additional uncertainty. Finally, a scheme is developed to address the issue of sensing errors. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) NO RECALL OSP (NR-OSP) <s> In cognitive radio networks (CRNs), effective and efficient channel exploitation is imperative for unlicensed secondary users to seize available network resources and improve resource utilization. In this paper, we propose a simple channel sensing order for secondary users in multi-channel CRNs without a priori knowledge of primary user activities. By sensing the channels according to the descending order of their achievable rates with optimal stopping, we show that the proposed channel exploitation approach is efficient yet effective in elevating throughput and resource utilization. Simulation results show that our proposed channel exploitation approach outperforms its counterparts by up to 18% in a single-secondary user pair scenario. 
In addition, we investigate the probability of packet transmission collision in a multi-secondary user pair scenario, and show that the probability of collision decreases as the number of channels increases and/or the number of secondary user pairs decreases. It is observed that the total throughput and resource utilization increase with the number of secondary user pairs due to increased transmission opportunities and multi-user diversity. Our results also demonstrate that resource utilization can be further improved via the proposed channel exploitation approach when the number of secondary user pairs approaches the number of channels. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) NO RECALL OSP (NR-OSP) <s> In multichannel system, user could keep transmitting over an instantaneous “on peak” channel by opportunistically accessing and switching among channels. Previous studies rely on constant transmission duration, which would fail to leverage more opportunities in time and frequency domain. In this paper, we consider opportunistic channel accessing/releasing scheme in multichannel system with Rayleigh fading channels. Our main goal is to derive a throughput-optimal strategy for determining when and which channel to access and when to release it. We formulate this real-time decision-making process as a two-dimensional optimal stopping problem. We prove that the two-dimensional optimal stopping rule can be reduced to a simple threshold-based policy. Leveraging the absorbing Markov chain theory, we obtain the optimal threshold as well as the maximum achievable throughput with computational efficiency. Numerical and simulation results show that our proposed channel utilization scheme achieves up to 140 percent throughput gain over opportunistic transmission with a single channel and up to 60 percent throughput gain over opportunistic channel access with constant transmission duration. <s> BIB004
|
In NR-OSP models, the decision maker bases its decision only on the currently observed variable, so the reward simplifies to y_n(x_n); recalling previously observed variables is not allowed. A backward-induction solution can be constructed easily for NR-OSP models. In BIB001 , Sabharwal et al. examine in detail the gains from finding a good-quality channel and the time overhead of exploring multiple channels. They consider independent and identically distributed Rayleigh block-fading channels and implement the NR-OSP model to decide how many channels to explore so as to achieve the expected throughput. In BIB002 , Jiang et al. use the NR-OSP model to study the sensing order together with the channel quality. The results show that sensing channels in descending order of their availability probabilities does not guarantee an optimal solution when adaptive modulation is used; they therefore introduce a dynamic-programming method for finding the optimal sensing order, at the price of higher computational complexity. In BIB003 , Cheng and Zhuang use NR-OSP to sense the channels in descending order of their achievable rates. Summarizing these OSP models, the user releases the accessed channel after a pre-defined duration and then repeats the same sensing process in the next iteration. In BIB004 , Li et al. present a two-dimensional NR-OSP model for sequential sensing and channel access in the time-frequency domain. Basically, they combine the NR-OSP model with finite-state Markovian channels to decide when and which channel should be accessed and released by the user. The problem is formulated around three actions: keep using the currently available channel, continue sensing the available channels, or, once a new channel is sensed, free the current channel accordingly.
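To illustrate the backward-induction construction mentioned above for NR-OSP models, the following sketch computes the stage-wise stopping thresholds for a user who sequentially senses channels without recall; the rate distribution, slot length T, sensing overhead tau and horizon N are illustrative assumptions rather than values from the cited works.

```python
import numpy as np

# Backward induction for a no-recall stopping rule (illustrative numbers only).
# The user may sense up to N channels; sensing one channel costs tau out of a slot
# of length T, and stopping at stage n with observed rate r yields (T - n*tau) * r.
rates = np.array([0.5, 1.0, 2.0])   # possible channel rates (distribution assumed known)
probs = np.array([0.3, 0.4, 0.3])
T, tau, N = 1.0, 0.1, 8

V = np.zeros(N + 2)                 # V[n]: expected reward before sensing channel n
threshold = np.zeros(N + 1)         # stop at stage n iff the observed rate >= threshold[n]
for n in range(N, 0, -1):
    remaining = T - n * tau         # transmission time left if we stop at stage n
    stop_or_go = np.maximum(remaining * rates, V[n + 1])   # V[N+1] = 0: last channel kept
    V[n] = float(probs @ stop_or_go)
    threshold[n] = V[n + 1] / remaining if remaining > 0 else np.inf

print("expected reward of the optimal rule:", round(V[1], 3))
print("stage-wise rate thresholds:", np.round(threshold[1:], 3))
```

The thresholds decrease as the remaining transmission time shrinks, capturing the trade-off between finding a better channel and the overhead of further sensing.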
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RECALL OSP (R-OSP) <s> In this study we consider optimal opportunistic spectrum access (OSA) policies for a transmitter in a multichannel wireless system, where a channel can be in one of multiple states. Each channel state is associated with either a probability of transmission success or a transmission rate. In such systems, the transmitter typically has partial information concerning the channel states, but can deduce more by probing individual channels, e.g. by sending control packets in the channels, at the expense of certain resources, e.g., energy and time. The main goal of this work is to derive optimal strategies for determining which channels to probe (in what sequence) and which channel to use for transmission. We consider two problems within this context,allthe constant data time (CDT) and the constant access time (CAT) problems. For both problems, we derive key structural properties of the corresponding optimal strategy. In particular, we show that it has a threshold structure and can be described by an index policy. We further show that the optimal CDT strategy can only take on one of three structural forms. Using these results we present a two-step lookahead CDT (CAT) strategy. This strategy is shown to be optimal for a number of cases of practical interest. We examine its performance under a class of practical channel models via numerical studies. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RECALL OSP (R-OSP) <s> Radio spectrum resource is of fundamental importance for wireless communication. Recent reports show that most available spectrum has been allocated. While some of the spectrum bands (e.g., unlicensed band, GSM band) have seen increasingly crowded usage, most of the other spectrum resources are underutilized. This drives the emergence of open spectrum and dynamic spectrum access concepts, which allow unlicensed users equipped with cognitive radios to opportunistically access the spectrum not used by primary users. Cognitive radio has many advanced features, such as agilely sensing the existence of primary users and utilizing multiple spectrum bands simultaneously. However, in practice such capabilities are constrained by hardware cost. In this paper, we discuss how to conduct efficient spectrum management in ad hoc cognitive radio networks while taking the hardware constraints (e.g., single radio, partial spectrum sensing and spectrum aggregation limit) into consideration. A hardware-constrained cognitive MAC, HC-MAC, is proposed to conduct efficient spectrum sensing and spectrum access decision. We identify the issue of optimal spectrum sensing decision for a single secondary transmission pair, and formulate it as an optimal stopping problem. A decentralized MAC protocol is then proposed for the ad hoc cognitive radio networks. Simulation results are presented to demonstrate the effectiveness of our proposed protocol. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RECALL OSP (R-OSP) <s> This letter studies the channel exploration problem for opportunistic spectrum usage systems, where exploring state information of each channel consumes time and energy. We formulate this problem as an optimal stopping problem and propose a myopic rule with low complexity, called one stage look ahead (1-SLA), to solve it. 
Moreover, the optimality of the 1-SLA rule for the energy-efficient channel exploration problem is proved, and simulation results are provided to show the effectiveness of the 1-SLA rule. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RECALL OSP (R-OSP) <s> This letter investigates the problem of energy-efficient exploration and exploitation of multichannel diversity in spectrum sharing cognitive radio systems where the secondary user sequentially explores the channel state information on the licenced channels with time and energy consumptions. As the number of the explored channels increases, the achieved multichannel diversity gain increases and so does the exploration consumption. Thus, there is a fundamental tradeoff between the multichannel diversity gain and channel exploration overhead. To maximise the expected normalised capacity of the secondary user, we formulate this tradeoff as an optimal stopping problem and propose a myopic one-stage look-ahead rule to solve it. It is shown that the one-stage look-ahead rule is optimal in the low power region; moreover, it also has good performance in general power region. Simulation results show that the achievable normalised throughput differs greatly for different exploration overhead, which can be regarded as a distinct feature of spectrum sharing systems. Copyright © 2012 John Wiley & Sons, Ltd. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RECALL OSP (R-OSP) <s> This letter studies the problem of exploiting multichannel diversity in a spectrum sharing system, where the secondary user (SU) sequentially explores channel state information on the licensed channels with time consumption. To maximize the expected achievable throughput for the SU, we formulate this problem as an optimal stopping problem, whose objective is to choose the right channel to stop exploration based on the observed signal-to-noise ratio sequence. Moreover, we propose a myopic but optimal rule, called one-stage look-ahead rule, to solve the stopping problem. <s> BIB005
|
In this model, the use of previously observed variables is allowed and the decision is supported by previously observed states: the decision maker in R-OSP can recall any earlier observation x_k, k ≤ n. The backward-induction solution is not feasible for this technique because the computational complexity grows exponentially as the number of decision horizons increases. Therefore, the NR-OSP approach has to be modified with a k-stage look-ahead rule to overcome the computational complexity; the k-stage look-ahead rule lets the user look ahead over the next k stages and then stop. The simplest and most effective instance is the 1-SLA rule, which solves a truncated version of the problem BIB003 . Moreover, this rule provides an optimal solution for monotone R-OSP models . In BIB002 , Jia et al. present an R-OSP model for trading off finding idle channels against the overhead of sensing multiple channels. They apply the k-SLA rule with k = 1, 2, and the results show that the 1-SLA rule performs excellently because it is very close to the optimal solution. In BIB001 , Chang and Liu use the R-OSP model to derive the optimal strategy, showing that a 2-SLA rule achieves the optimal solution within a finite number of steps. Basically, they consider three actions: use an already observed channel, sense the unobserved channels, or probe and then use one of those unobserved channels, applying the 2-SLA rule to find the optimal solution. In , Kim and Giannakis present an R-OSP model in which the SUs adopt a parallel sensing technique. The results show that the sensing capability increases when the sampling duration is increased, which leads to more spectrum opportunities for the SUs. Moreover, the collision constraints imposed by the PUs can be respected by using dynamic programming to find the optimal rule for choosing the best time to stop sensing and the best set of channels to access. In BIB003 , Xu et al. use an R-OSP model for energy-efficient channel exploration, utilizing the 1-SLA rule to maximize throughput. Finally, in BIB004 and BIB005 , the authors propose R-OSP models with the 1-SLA rule that enable collision avoidance among SUs; they also present a solution for the trade-off between channel exploration and exploitation in OSA systems.
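The 1-SLA rule itself is simple enough to sketch: stop as soon as the payoff of stopping now is at least the expected payoff of exploring exactly one more channel and then stopping. The code below applies it with recall of the best rate observed so far; the rate distribution, slot length T, sensing overhead tau and channel limit n_max are all assumptions for illustration only.

```python
import numpy as np

# One-stage look-ahead (1-SLA) rule for stopping with recall (illustrative numbers).
# After sensing n channels the user can recall the best rate y seen so far: stopping
# now yields (T - n*tau)*y, while sensing one more channel and then stopping yields
# (T - (n+1)*tau) * E[max(y, R)].  1-SLA stops as soon as the first value is larger.
rates = np.array([0.5, 1.0, 2.0])
probs = np.array([0.3, 0.4, 0.3])
T, tau, n_max = 1.0, 0.1, 8

def sla_stop(y, n):
    stop_now = (T - n * tau) * y
    one_more = (T - (n + 1) * tau) * float(probs @ np.maximum(y, rates))
    return stop_now >= one_more

rng = np.random.default_rng(1)
n, best = 1, float(rng.choice(rates, p=probs))            # sense the first channel
while n < n_max and not sla_stop(best, n):
    n += 1
    best = max(best, float(rng.choice(rates, p=probs)))   # recall keeps the best rate
print(f"stopped after sensing {n} channels, best rate = {best}")
```

Because the payoff structure here is monotone, this myopic rule coincides with the optimal stopping rule, which is the property exploited in the monotone R-OSP results cited above.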
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> D. MULTI-ARMED BANDIT PROBLEM <s> Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> D. MULTI-ARMED BANDIT PROBLEM <s> Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, an Aloha-like spectrum access scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collision incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement leaning (MARL) is applied in the framework of $Q$-learning by considering other secondary users as a part of the environment. A rigorous proof of the convergence of $Q$-learning is provided via the similarity between the $Q$-learning and Robinson-Monro algorithm, as well as the analysis of the corresponding ordinary differential equation (via Lyapunov function). The performance of learning (speed and gain in utility) is evaluated by numerical simulations. <s> BIB002
|
The multi-armed bandit (MAB) problem provides a powerful learning framework for choosing one or more objects among several objects whose statistical information is unknown. Basically, it explores the statistics of the resources during the decision process while maximizing the current reward on the basis of the currently estimated statistics. In the classical MAB problem a player plays one of K arms in each of a sequence of equal time slots, and obtains a real-valued reward from the arm chosen in that slot. The main purpose of MAB is to construct a learning policy π, based on the history of decisions, that maximizes the cumulative reward. The performance is evaluated using the regret metric, which measures the reward loss of the policy and is written as R^π(T) = T µ* − E[∑_{t=1}^{T} r_{π(t)}(t)], where µ* = max_k{µ_k} is the maximum expected reward, π(t) represents the arm selected at time t, and r_{π(t)}(t) is the reward obtained in slot t. The main purpose is to make R^π(T) grow as slowly as possible so that the time-averaged regret R^π(T)/T → 0 as T → ∞; consequently, the time-averaged reward is maximized. Lai and Robbins showed that the expected regret is asymptotically logarithmic in time, of order O(K log T), and their work was further extended to playing multiple arms in BIB001 . Gittins and Gittins proposed index policies based on the sample mean, and the upper confidence bound (UCB1) algorithm is briefly described in . In this algorithm the arm with the highest index μ̄_k(T) + √(2 ln T / m_k) is chosen at each decision epoch T, where μ̄_k(T) is the measured expected reward of arm k up to epoch T and m_k represents the number of times arm k has been played. The first part of this index represents exploitation, while the second part encourages exploration of arms that have been played rarely. MAB has been studied extensively in the previous literature and is categorized into rested and restless MAB. In the former, the state of an arm evolves only when it is played and is otherwise frozen; in the latter, the arm states keep evolving independently of the actions. For the rested MAB, the optimal policy plays the arm with the highest Gittins index at each time, whereas for the restless MAB the arm with the highest Whittle index BIB002 is played at each time. This technique can be implemented using two models, the i.i.d. MAB and the restless MAB, which are explained briefly in the following sections.
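The UCB1 index described above translates directly into a few lines of code; the sketch below runs it on a small set of Bernoulli arms whose (unknown) means are chosen purely for illustration.

```python
import numpy as np

# Minimal UCB1 sketch for the classical i.i.d. MAB (illustrative Bernoulli arms).
rng = np.random.default_rng(2)
true_means = np.array([0.2, 0.5, 0.8])   # unknown to the player
K, horizon = len(true_means), 5000

counts = np.zeros(K)                     # m_k: number of times arm k has been played
means = np.zeros(K)                      # empirical mean reward of arm k
total = 0.0

for t in range(1, horizon + 1):
    if t <= K:
        arm = t - 1                      # play each arm once to initialise
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)   # exploitation + exploration
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]     # incremental mean update
    total += reward

regret = horizon * true_means.max() - total
print(f"empirical means {np.round(means, 3)}, empirical regret ≈ {regret:.1f}")
```

Running it longer shows the empirical regret growing roughly logarithmically in the horizon, consistent with the O(K log T) behaviour discussed above.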
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) I.I.D. MAB <s> We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart where players act collectively as a single entity by exchanging observations and making decisions jointly. A decentralized policy is constructed to achieve this optimal order while ensuring fairness among players and without assuming any pre-agreement or information exchange among players. Based on a Time Division Fair Sharing (TDFS) of the M best arms, the proposed policy is constructed and its order optimality is proven under a general reward model. Furthermore, the basic structure of the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. We also establish a lower bound on the system regret growth rate for a general class of decentralized polices, to which the proposed policy belongs. This problem finds potential applications in cognitive radio networks, multi-channel communication systems, multi-agent systems, web search and advertising, and social networks. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) I.I.D. MAB <s> We study a simple game-theoretic model for the spread of an innovation in a network. The diffiusion of the innovation is modeled as the dynamics of a coordination game in which the adoption of a common strategy between players has a higher payoff. Classical results in game theory provide a simple condition for the innovation to spread through the network. The present paper characterizes the rate of convergence as a function of graph structure. In particular, we derive a dichotomy between well-connected (e.g. random) graphs that show slow convergence and poorly connected, low dimensional graphs that show fast convergence. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) I.I.D. MAB <s> The problem of cooperative allocation among multiple secondary users to maximize cognitive system throughput is considered. The channel availability statistics are initially unknown to the secondary users and are learnt via sensing samples. Two distributed learning and allocation schemes which maximize the cognitive system throughput or equivalently minimize the total regret in distributed learning and allocation are proposed. The first scheme assumes minimal prior information in terms of pre-allocated ranks for secondary users while the second scheme is fully distributed and assumes no such prior information. The two schemes have sum regret which is provably logarithmic in the number of sensing time slots. A lower bound is derived for any learning scheme which is asymptotically logarithmic in the number of slots. Hence, our schemes achieve asymptotic order optimality in terms of regret in distributed learning and allocation. 
<s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) I.I.D. MAB <s> This paper considers the design of efficient strategies that allow cognitive users to choose frequency bands to sense and access among multiple bands with unknown parameters. First, the scenario in which a single cognitive user wishes to opportunistically exploit the availability of frequency bands is considered. By adopting tools from the classical bandit problem, optimal as well as low complexity asymptotically optimal solutions are developed. Next, the multiple cognitive user scenario is considered. The situation in which the availability probability of each channel is known is first considered. An optimal symmetric strategy that maximizes the total throughput of the cognitive users is developed. To avoid the possible selfish behavior of the cognitive users, a game-theoretic model is then developed. The performance of both models is characterized analytically. Then, the situation in which the availability probability of each channel is unknown a priori is considered. Low-complexity medium access protocols, which strike an optimal balance between exploration and exploitation in such competitive environments, are developed. The operating points of these low-complexity protocols are shown to converge to those of the scenario in which the availability probabilities are known. Finally, numerical results are provided to illustrate the impact of sensing errors and other practical considerations. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) I.I.D. MAB <s> The fundamental problem of multiple secondary users contending for opportunistic spectrum access over multiple channels in cognitive radio networks has been formulated recently as a decentralized multi-armed bandit (D-MAB) problem. In a D-MAB problem there are M users and N arms (channels) that each offer i.i.d. stochastic rewards with unknown means so long as they are accessed without collision. The goal is to design a decentralized online learning policy that incurs minimal regret, defined as the difference between the total expected rewards accumulated by a model-aware genie, and that obtained by all users applying the policy. We make two contributions in this paper. First, we consider the setting where the users have a prioritized ranking, such that it is desired for the K-th-ranked user to learn to access the arm offering the K-th highest mean reward. For this problem, we present the first distributed policy that yields regret that is uniformly logarithmic over time without requiring any prior assumption about the mean rewards. Second, we consider the case when a fair access policy is required, i.e., it is desired for all users to experience the same mean reward. For this problem, we present a distributed policy that yields order-optimal regret scaling with respect to the number of users and arms, better than previously proposed policies in the literature. Both of our distributed policies make use of an innovative modification of the well known UCB1 policy for the classic multi-armed bandit problem that allows a single user to learn how to play the arm that yields the K-th largest mean reward. <s> BIB005
|
In this model, each arm is modelled as an i.i.d. process with an unknown distribution and unknown mean, so the players know neither the means nor the distributions of the rewards of the different arms. In this scenario, the players also have no dedicated control channel for communication; therefore, when two players select the same arm, neither of them receives any reward. In BIB004 , Lai et al. model each channel as an arm and apply the UCB1 algorithm to it. The results show that the probability of selecting a channel is directly proportional to its measured expected reward. This work is extended in BIB004 to the user-channel matching problem, in which the different transmission rates available to multiple users are formulated as an MAB problem: each user-channel matching profile is treated as an arm and the UCB1 algorithm is applied, scaled over all the matching profiles. Basically, the authors modify the UCB1 algorithm to exploit the correlation between the different arms. The simulation results are order-optimal because the regret increases only logarithmically in time, while growing polynomially in the number of channels. In BIB003 , Anandkumar et al. utilize the ε-greedy algorithm for selecting a collision-free channel-selection profile. Moreover, they use an adaptive randomized UCB1 algorithm which makes an SU choose a channel at random only if a collision occurred in the previous slot and otherwise proceed with UCB1. The results showed that this policy handles collisions among multiple SUs selecting the same channel and achieves logarithmic-order regret. In BIB001 , Liu and Zhao design an N-parallel algorithm that enables the players to access multiple channels simultaneously. In BIB002 , Montanari and Saberi propose a decentralized optimal learning policy that is order-optimal because its regret is of logarithmic order. In BIB005 , Gai and Krishnamachari propose a selective-learning policy based on two algorithms, one for the case of prioritized users and one for an equal-access policy for all users. Basically, they provide a solution for determining the access policy in a distributed system, based on selectively learning the k-th largest expected reward with UCB1.
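The collision-driven randomization used by several of the decentralized policies above can be sketched as follows: each user runs its own UCB1, targets one of the top-M arms according to a personal rank, and re-draws that rank at random whenever it collides. This is only loosely inspired by the cited schemes, not a faithful reproduction of any of them, and every parameter is an illustrative assumption.

```python
import numpy as np

# Decentralised learning sketch with collision-driven rank randomisation (illustrative).
rng = np.random.default_rng(3)
true_means = np.array([0.9, 0.7, 0.5, 0.3])    # channel availability probabilities
M, K, horizon = 2, 4, 3000                     # M secondary users, K channels

counts = np.ones((M, K))                       # pretend each arm was sampled once ...
means = np.full((M, K), 0.5)                   # ... with an uninformative estimate
ranks = rng.integers(0, M, size=M)             # each user's target rank among top-M arms

for t in range(1, horizon + 1):
    ucb = means + np.sqrt(2.0 * np.log(t + K) / counts)
    # every user independently picks the arm holding its rank in its own UCB ordering
    choices = np.array([np.argsort(-ucb[u])[ranks[u]] for u in range(M)])
    for u in range(M):
        arm = choices[u]
        collided = np.count_nonzero(choices == arm) > 1
        reward = 0.0 if collided else float(rng.random() < true_means[arm])
        counts[u, arm] += 1
        means[u, arm] += (reward - means[u, arm]) / counts[u, arm]
        if collided:
            ranks[u] = rng.integers(0, M)      # collision: pick a new random rank

print("per-user estimates of channel availability:\n", np.round(means, 2))
```

With no control channel, collisions themselves act as the only coordination signal, which is exactly the constraint the decentralized policies above are designed around.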
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RESTLESS MAB <s> We consider the task of optimally sensing a two-state Markovian channel with an observation cost and without any prior information regarding the channel's transition probabilities. This task is of interest in the field of cognitive radio as a model for opportunistic access to a communication network by a secondary user. The optimal sensing problem may be cast into the framework of model-based reinforcement learning in a specific class of partially observable Markov decision processes (POMDPs). We propose the Tiling Algorithm, an original method aimed at reaching an optimal tradeoff between the exploration (or estimation) and exploitation requirements. It is shown that this algorithm achieves finite horizon regret bounds that are as good as those recently obtained for multi-armed bandits and finite-state Markov decision processes (MDPs). <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RESTLESS MAB <s> We consider an opportunistic spectrum access (OSA) problem where the time-varying condition of each channel (e.g., as a result of random fading or certain primary users' activities) is modeled as an arbitrary finite-state Markov chain. At each instance of time, a (secondary) user probes a channel and collects a certain reward as a function of the state of the channel (e.g., good channel condition results in higher data rate for the user). Each channel has potentially different state space and statistics, both unknown to the user, who tries to learn which one is the best as it goes and maximizes its usage of the best channel. The objective is to construct a good online learning algorithm so as to minimize the difference between the user's performance in total rewards and that of using the best channel (on average) had it known which one is the best from a priori knowledge of the channel statistics (also known as the regret). This is a classic exploration and exploitation problem and results abound when the reward processes are assumed to be iid. Compared to prior work, the biggest difference is that in our case the reward process is assumed to be Markovian, of which iid is a special case. In addition, the reward processes are restless in that the channel conditions will continue to evolve independent of the user's actions. This leads to a restless bandit problem, for which there exists little result on either algorithms or performance bounds in this learning context to the best of our knowledge. In this paper we introduce an algorithm that utilizes regenerative cycles of a Markov chain and computes a samplemean based index policy, and show that under mild conditions on the state transition probabilities of the Markov chains this algorithm achieves logarithmic regret uniformly over time, and that this regret bound is also optimal. We numerically examine the performance of this algorithm along with a few other learning algorithms in the case of an OSA problem with Gilbert-Elliot channel models, and discuss how this algorithm may be further improved (in terms of its constant) and how this result may lead to similar bounds for other algorithms. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) RESTLESS MAB <s> Due to its application in numerous engineering problems, the restless multi-armed bandit (RMAB) problem is of fundamental importance in stochastic decision theory. 
However, solving the RMAB problem is well known to be PSPACE-hard, with the optimal policy usually intractable due to the exponential computation complexity. A natural alternative approach is to seek simple myopic policies which are easy to implement. This paper presents a generic study on the optimality of the myopic policy for the RMAB problem. More specifically, we develop three axioms characterizing a family of generic and practically important functions termed as regular functions. By performing a mathematical analysis based on the developed axioms, we establish the closed-form conditions under which the myopic policy is guaranteed to be optimal. The axiomatic analysis also illuminates important engineering implications of the myopic policy including the intrinsic tradeoff between exploration and exploitation. A case study is then presented to illustrate the application of the derived results in analyzing a class of RMAB problems arising from multi-channel opportunistic access. <s> BIB003
|
Models in which the arm states keep evolving independently of the players' actions are called restless MAB. In a restless MAB, a player selects K arms out of N arms to play at each time. Basically, the state of an arm determines the reward for the player when the arm is played, and the states evolve according to Markovian rules regardless of whether the arms are active or passive; these Markovian dynamics are known to the players in advance. The main purpose of this model is to design an optimal arm-selection policy that maximizes the long-term reward. The performance of this model is measured by the regret of the arm-selection policy, i.e. the reward loss relative to a player that knows which K arms are the most rewarding and always plays those K best arms. The previous literature BIB001 showed that the PU activities can be modeled as a Markovian process and that the channel quality exhibits Markovian properties; however, these statistics must be known in advance in restless MAB models. In BIB002 , Tekin and Liu presented a restless MAB solution for a single user based on the UCB1 algorithm: they modify UCB1 into a regenerative-cycle algorithm with a sample-mean-based index policy to obtain expected regret of logarithmic order. In BIB003 , Wang and Chen have designed an intelligent myopic policy and established conditions under which it provides an optimal solution to the problem. The literature has shown that one major limitation of the earlier restless MAB models is that they can only handle single-user scenarios, and the policies used there are not applicable to multi-user scenarios.
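To show how a myopic policy operates in a restless multi-channel setting, the sketch below maintains a belief that each independently evolving two-state channel is idle, always senses the channel with the highest belief, and updates the beliefs with the (assumed known) Markovian dynamics; the transition probabilities and the perfect-sensing assumption are illustrative only and are not taken from the cited papers.

```python
import numpy as np

# Myopic policy sketch for a restless multi-channel model (illustrative parameters).
# Each channel is an independent two-state Markov chain (0 = busy, 1 = idle); the user
# senses one channel per slot and keeps a belief omega_k = Pr(channel k is idle).
rng = np.random.default_rng(4)
K, horizon = 4, 2000
p11, p01 = 0.8, 0.3                        # Pr(idle -> idle), Pr(busy -> idle)

state = rng.integers(0, 2, size=K)         # true (hidden) channel states
omega = np.full(K, p01 / (1 - p11 + p01))  # start from the stationary belief
reward = 0.0

for t in range(horizon):
    k = int(np.argmax(omega))              # myopic: sense the channel most likely idle
    obs = state[k]                         # perfect sensing assumed for the sketch
    reward += obs                          # one packet delivered if the channel is idle
    # belief update: the sensed channel becomes 0/1, then all beliefs evolve one step
    omega[k] = float(obs)
    omega = omega * p11 + (1 - omega) * p01
    state = (rng.random(K) < np.where(state == 1, p11, p01)).astype(int)

print(f"throughput of the myopic policy: {reward / horizon:.3f}")
```

The channels here keep evolving whether or not they are sensed, which is precisely the restless property that separates this setting from the rested MAB.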
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. INFORMATION SELECTION AND RETRIEVAL <s> Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, an Aloha-like spectrum access scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collision incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement leaning (MARL) is applied in the framework of $Q$-learning by considering other secondary users as a part of the environment. A rigorous proof of the convergence of $Q$-learning is provided via the similarity between the $Q$-learning and Robinson-Monro algorithm, as well as the analysis of the corresponding ordinary differential equation (via Lyapunov function). The performance of learning (speed and gain in utility) is evaluated by numerical simulations. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. INFORMATION SELECTION AND RETRIEVAL <s> We consider the task of optimally sensing a two-state Markovian channel with an observation cost and without any prior information regarding the channel's transition probabilities. This task is of interest in the field of cognitive radio as a model for opportunistic access to a communication network by a secondary user. The optimal sensing problem may be cast into the framework of model-based reinforcement learning in a specific class of partially observable Markov decision processes (POMDPs). We propose the Tiling Algorithm, an original method aimed at reaching an optimal tradeoff between the exploration (or estimation) and exploitation requirements. It is shown that this algorithm achieves finite horizon regret bounds that are as good as those recently obtained for multi-armed bandits and finite-state Markov decision processes (MDPs). <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. INFORMATION SELECTION AND RETRIEVAL <s> This paper studies the tradeoff between throughput and multichannel diversity in multichannel opportunistic spectrum access (OSA) systems. We explicitly consider channel condition as well as the activities of the primary users. We assume that the primary users use the licensed channel in a slotted fashion and the secondary users can only explore one licensed channel at a time. The secondary users then sequentially explore multiple channels to find the best channel for transmission. However, channel exploration is time-consumed, which decreases effective transmission time in a slot. For single secondary user OSA systems, we formulate the channel exploration problem as an optimal stopping problem with recall, and propose a myopic but optimal approach. For multiple secondary user OSA systems, we propose an adaptive stochastic recall algorithm (ASRA) to capture the collision among multiple secondary users. It is shown that the proposed solutions in this paper achieve increased throughput both the scenario of both single secondary user as well as multiple secondary suers. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. 
INFORMATION SELECTION AND RETRIEVAL <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB004 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. INFORMATION SELECTION AND RETRIEVAL <s> In this paper, we design distributed spectrum access mechanisms with both complete and incomplete network information. We propose an evolutionary spectrum access mechanism with complete network information, and show that the mechanism achieves an equilibrium that is globally evolutionarily stable. With incomplete network information, we propose a distributed learning mechanism, where each user utilizes local observations to estimate the expected throughput and learns to adjust its spectrum access strategy adaptively over time. We show that the learning mechanism converges to the same evolutionary equilibrium on the time average. Numerical results show that the proposed mechanisms achieve up to 35 percent performance improvement over the distributed reinforcement learning mechanism in the literature, and are robust to the perturbations of users' channel selections. <s> BIB005
|
In CIoT, information exchange is the most challenging part because of the dynamic, uncertain, and constantly changing conditions of the IoT ecosystem. Moreover, the heterogeneous, non-linear, high-dimensional, and parallel processing of objects' data adds further complexity. From a practical perspective, spectrum access systems must take environmental conditions into account, including information about the spectrum, channel quality, and user traffic demand. The essential requirement is to apply cognitive radio standards for spectrum allocation, node selection, transmit power control, and communication protocols so that objects use radio resources effectively BIB004 . Information exchange in CIoT can be static or dynamic, local or global, realistic or unrealistic, and known or unknown. First, game theory provides an excellent framework for capturing the interaction and coordination among multiple users (objects) and their behaviors. Quantity, precision, and accuracy jointly characterize the quality of information BIB005 : quantity measures how much useful information is obtained from a specific task, precision is the proportion of relevant information among all the information gathered, and accuracy refers to how relevant the information is to the decision maker. What is needed in practice is to design both coupled game-learning solutions, which require information about other users, and uncoupled algorithms that can make decisions based only on the local information of the users (objects). Dynamic and correlated states are well addressed by the theory of Markovian processes BIB002 : the state of each object is observed, shared with other objects, and used to construct an optimal joint policy that reaches a globally stable state. Scenarios in which statistical information is unknown a priori are well handled by the multi-armed bandit problem BIB003 , which provides effective learning techniques for choosing one or more objects among several whose statistics are unknown. Finally, the uncertain realization of unexplored channels is well addressed by optimal stopping theory BIB001 , in which each variable (object) is observed in terms of its reward and a stopping action is taken to minimize the cost.
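As a toy operationalization of the quantity / precision / accuracy notions described above (the `InfoItem` structure, the utility scores, and the aggregation rules are all hypothetical choices, not definitions taken from BIB005 ), the following sketch scores a batch of retrieved information items:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InfoItem:
    """One piece of information retrieved by an object (hypothetical structure)."""
    relevant: bool   # is the item relevant to the task at all?
    utility: float   # usefulness score assigned by the decision maker, in [0, 1]

def information_quality(items: List[InfoItem]) -> Dict[str, float]:
    """Toy illustration of quantity, precision, and accuracy of retrieved information."""
    if not items:
        return {"quantity": 0.0, "precision": 0.0, "accuracy": 0.0}
    relevant = [it for it in items if it.relevant]
    quantity = sum(it.utility for it in relevant)        # how much useful information was obtained
    precision = len(relevant) / len(items)               # share of relevant items among all retrieved items
    accuracy = (sum(it.utility for it in relevant) / len(relevant)) if relevant else 0.0
    return {"quantity": quantity, "precision": precision, "accuracy": accuracy}

# usage example with three hypothetical items
print(information_quality([InfoItem(True, 0.9), InfoItem(False, 0.0), InfoItem(True, 0.4)]))
```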
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. COST <s> Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, an Aloha-like spectrum access scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collision incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement leaning (MARL) is applied in the framework of $Q$-learning by considering other secondary users as a part of the environment. A rigorous proof of the convergence of $Q$-learning is provided via the similarity between the $Q$-learning and Robinson-Monro algorithm, as well as the analysis of the corresponding ordinary differential equation (via Lyapunov function). The performance of learning (speed and gain in utility) is evaluated by numerical simulations. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. COST <s> In this paper, we study game dynamics and learning schemes for heterogeneous 4G networks. We introduce a novel learning scheme called cost-to-learn that incorporates the cost to switch, the switching delay, and the cost of changing to a new action and, captures the realistic behavior of the users that we have experimented on OPNET simulations. Considering a dynamic and uncertain environment where the users and operators have only a numerical value of their own payoffs as information, we construct various heterogeneous combined fully distributed payoff and strategy reinforcement learning (CODIPAS-RL): the users try to learn their own optimal payoff and their optimal strategy simultaneously. We establish the asymptotic pseudo-trajectories as solution of differential equations. Using evolutionary game dynamics, we prove the convergence and stability properties in specific classes of dynamic robust games. We provide various numerical examples and OPNET simulations in the context network selection in wireless local area networks (WLAN) and Long Term Evolution (LTE). <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. COST <s> This paper studies the tradeoff between throughput and multichannel diversity in multichannel opportunistic spectrum access (OSA) systems. We explicitly consider channel condition as well as the activities of the primary users. We assume that the primary users use the licensed channel in a slotted fashion and the secondary users can only explore one licensed channel at a time. The secondary users then sequentially explore multiple channels to find the best channel for transmission. However, channel exploration is time-consumed, which decreases effective transmission time in a slot. For single secondary user OSA systems, we formulate the channel exploration problem as an optimal stopping problem with recall, and propose a myopic but optimal approach. For multiple secondary user OSA systems, we propose an adaptive stochastic recall algorithm (ASRA) to capture the collision among multiple secondary users. It is shown that the proposed solutions in this paper achieve increased throughput both the scenario of both single secondary user as well as multiple secondary suers. 
<s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> B. COST <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB004
|
Performance is another important aspect of learning-based decision-theoretic models for CIoT. A learning strategy may take a long time to update a player's information from its action-payoff history and to explore all possible selections. Performance in these models is measured in terms of the cost of resource utilization and of action switching. The literature shows that information exchange consumes network resources and introduces extra overhead in terms of power, updating, discovery, and time BIB001 . Performance is also affected by action switching in the network, which involves hardware re-synchronization and re-configuration according to the newly chosen action or strategy BIB002 . A way to reduce resource and network utilization is to design uncoupled algorithms that depend only on local information and that combine decision-theoretic game models with MDP, MAB, and OSP for multi-user systems. The literature indicates that, until now, only coupled algorithms have been reported, which are unsuitable for multi-agent systems BIB004 . In this paper we advocate a decision-making approach for CIoT that combines multiple decision-theoretic models, i.e., game theory, MAB, MDP, and OSP, to achieve better performance in multi-user systems. Action-switching cost can be reduced by including the action cost in the optimization objectives; this can be made explicit in the problem formulation of the optimal stopping problem, which accounts for the overhead of resource selection and discovery BIB003 . The other decision-theoretic solutions, including game models, MDP, and the MAB problem, converge to an optimal solution through the trial-payoff history of the players (objects), as presented in Table 2 . This behavior leads to a large switching cost because the history of players is consulted repeatedly BIB002 . Therefore, optimal stopping theory provides better performance and lower cost than the other three decision-theoretic models.
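To illustrate how an exploration (probing) cost can be folded directly into a stopping objective, the following sketch uses a myopic one-step lookahead rule with recall: the user keeps probing channels as long as the expected rate improvement from one more probe exceeds the probing cost. This is a simplified illustration, not the formulation of BIB003 ; the uniform rate model, PROBE_COST, and the number of channels are assumptions made for the example.

```python
import random

random.seed(1)

PROBE_COST = 0.08          # hypothetical cost (in rate units) of probing one more channel
CHANNEL_RATES = [random.uniform(0.0, 1.0) for _ in range(10)]   # unknown to the user beforehand

def expected_gain(best_so_far: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Expected improvement over best_so_far if one more channel with rate ~ Uniform(lo, hi) is probed."""
    if best_so_far >= hi:
        return 0.0
    # E[(X - b)^+] for X ~ Uniform(lo, hi)
    return (hi - best_so_far) ** 2 / (2.0 * (hi - lo))

best, probes = 0.0, 0
for rate in CHANNEL_RATES:
    probes += 1
    best = max(best, rate)                     # recall: the user may return to any channel probed so far
    if expected_gain(best) <= PROBE_COST:      # stop when the expected improvement no longer covers the probing cost
        break

print(f"probed {probes} channels, transmit on best rate {best:.3f}, "
      f"net utility {best - probes * PROBE_COST:.3f}")
```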
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> C. CONVERGENCE SPEED <s> Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, an Aloha-like spectrum access scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collision incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement leaning (MARL) is applied in the framework of $Q$-learning by considering other secondary users as a part of the environment. A rigorous proof of the convergence of $Q$-learning is provided via the similarity between the $Q$-learning and Robinson-Monro algorithm, as well as the analysis of the corresponding ordinary differential equation (via Lyapunov function). The performance of learning (speed and gain in utility) is evaluated by numerical simulations. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> C. CONVERGENCE SPEED <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB002
|
A better learning technique should be designed to increase convergence speed while minimizing cost. Convergence speed is the most important aspect to consider when designing an efficient learning algorithm, since fast adaptation to the environment reduces information overhead and cost. Existing models typically establish only that convergence occurs and do not address convergence speed, which is essential for developing practical systems. Moreover, the background study shows that they achieve convergence only after a very large number of iterations BIB002 , BIB001 . Their asymptotic convergence therefore creates a large overhead and cost, which is not suitable for practical system implementation. Finally, the optimal stopping problem provides only a one-shot decision, so the notion of convergence speed does not apply to it.
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. OPEN ISSUES AND CHALLENGES <s> We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously as well as rapidly. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. OPEN ISSUES AND CHALLENGES <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> A. OPEN ISSUES AND CHALLENGES <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. 
For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB003
|
The future of the emerging CIoT technology is exciting, although the field is still in a developmental stage and considerable research is needed to reach practical solutions. The main open challenges and issues are as follows:
• Current CIoT solutions suffer from imperfect sensing, which affects decision results. Channel sensing and channel selection are still explored separately in the CIoT domain; the core objective is to execute channel sensing and selection jointly, in a federated manner.
• User demands should be incorporated into the metric formulation for intelligent decision making. Current decision-making solutions consider only the optimization of allocated resources and fail to reflect user demand. It is of utmost importance to consider the user demands of each specific object and to form an optimization matrix for intelligent decision making.
• Intensive research on learning algorithms has shown that they can update their strategies on the basis of action-payoff information, but such strategies take a long time to explore all players' actions and converge to a stable solution. A promising direction is to design knowledge-based learning techniques that increase convergence speed for better performance BIB003 .
• Current game-theoretic models cannot handle large-scale CIoT networks because they lack knowledge sharing and self-organization. New game-theoretic models are required that can capture social behaviors and manage self-organization. A bio-inspired approach based on a localized altruistic game, in which each player maximizes its own utility together with the utilities of its neighbors, has been proposed to achieve global optimization via local information exchange BIB001 . The prospective outcome here is a model based on social behaviors that addresses the challenging issue of self-organizing optimization. In large-scale CIoT, global information exchange between players is not appropriate; in fact, players have to rely on local information, which can be extracted using the game model. What is needed is a scheme in which the game players are altruistic and share their information with their neighbors to achieve optimization and self-organization BIB002 .
• It is desirable to achieve global optimization using local coordination and interaction games for objects under dynamically unpredictable and incomplete information constraints in cognitive decision making.
• Last but not least, this emerging field is still in a developmental phase, and extensive research is needed to apply it to practical systems. Considerable effort is required from academia and industry to develop applications and practical CIoT systems for scenarios ranging from the smart home to the smart city.
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) GAME MODELS WITH UNCOUPLED ALGORITHMS <s> This paper investigates price-based resource allocation strategies for two-tier femtocell networks, in which a central macrocell is underlaid with distributed femtocells, all operating over the same frequency band. Assuming that the macrocell base station (MBS) protects itself by pricing the interference from femtocell users, a Stackelberg game is formulated to study the joint utility maximization of the macrocell and femtocells subject to a maximum tolerable interference power constraint at the MBS. Two practical femtocell network models are investigated: sparsely deployed scenario for rural areas and densely deployed scenario for urban areas. For each scenario, two pricing schemes: uniform pricing and non-uniform pricing, are proposed. The Stackelberg equilibriums for the proposed games are characterized, and an effective distributed interference price bargaining algorithm with guaranteed convergence is proposed for the uniform-pricing case. Numerical examples are presented to verify the proposed studies. It is shown that the proposed schemes are effective in resource allocation and macrocell protection for both the uplink and downlink transmissions in spectrum-sharing femtocell networks. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) GAME MODELS WITH UNCOUPLED ALGORITHMS <s> We consider a cognitive radio system with one primary (licensed) user and multiple secondary (unlicensed) users. Given the interference temperature constraint, the secondary users compete for the available spectrum to fulfill their own communication need. Borrowing the concept of price from market theory, we develop a decentralized Stackelberg game formulation for power allocation. In this scheme, the primary user (leader) announces prices for the available tones such that a system utility is maximized. Using the announced prices, secondary users (followers) compete for the available bandwidth to maximize their own utilities. We show that this Stackelberg game is polynomial time solvable under certain channel conditions. When the individual power constraints of secondary users are inactive (due to strict interference temperature constraint), the proposed distributed power control method is decomposable across the tones and unlike normal water-filling it respects the interference temperature constraints of the primary user. When individual power constraints are active, we propose a distributed approach that solves the problem under an aggregate interference temperature constraint. Moreover, we propose a dual decomposition based power control method and show that it solves the Stackelberg game asymptotically when the number of tones becomes large. <s> BIB002 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) GAME MODELS WITH UNCOUPLED ALGORITHMS <s> Opportunistic spectrum access (OSA) has been regarded as the most promising approach to solve the paradox between spectrum scarcity and waste. Intelligent decision making is key to OSA and differentiates it from previous wireless technologies. In this article, a survey of decision-theoretic solutions for channel selection and access strategies for OSA system is presented. 
We analyze the challenges facing OSA systems globally, which mainly include interactions among multiple users, dynamic spectrum opportunity, tradeoff between sequential sensing cost and expected reward, and tradeoff between exploitation and exploration in the absence of prior statistical information. We provide comprehensive review and comparison of each kind of existing decision-theoretic solution, i.e., game models, Markovian decision process, optimal stopping problem and multi-armed bandit problem. We analyze their strengths and limitations and outline further research for both technical contents and methodologies. In particular, these solutions are critically analyzed in terms of information, cost and convergence speed, which are key concerns for practical implementation. Moreover, it is noted that each kind of existing decision-theoretic solution mainly addresses one aspect of the challenges, which implies that two or more kinds of decision-theoretic solutions should be incorporated to address more challenges simultaneously. <s> BIB003 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 1) GAME MODELS WITH UNCOUPLED ALGORITHMS <s> In this article, we investigate self-organizing optimization for cognitive small cells (CSCs), which have the ability to sense the environment, learn from historical information, make intelligent decisions, and adjust their operational parameters. By exploring the inherent features, some fundamental challenges for self-organizing optimization in CSCs are presented and discussed. Specifically, the dense and random deployment of CSCs brings about some new challenges in terms of scalability and adaptation; furthermore, the uncertain, dynamic, and incomplete information constraints also impose some new challenges in terms of convergence and robustness. For providing better service to users and improving resource utilization, four requirements for self-organizing optimization in CSCs are presented and discussed. Following the attractive fact that the decisions in game-theoretic models are exactly coincident with those in self-organizing optimization (i.e., distributed and autonomous), we establish a framework of gametheoretic solutions for self-organizing optimization in CSCs and propose some featured game models. Specifically, their basic models are presented, some examples are discussed, and future research directions are given. <s> BIB004
|
Current game models usually monitor the environment and the statistical information of other players (objects). The cognitive property of CIoT gives objects a perception-action cycle that includes the objects' own information, environmental information, and the sensing of heterogeneous objects, as mentioned in the sections above. In this framework, objects observe multi-user object and channel selection through the perception-action cycle, extract useful information from partial feedback, and then fine-tune their behaviors and policies towards a desirable solution using stochastic or repeated games. The hierarchical nature of the CIoT framework calls for new uncoupled learning algorithms such as those based on the Stackelberg game BIB001 , BIB002 as discussed in section IIIe. Hierarchical game models are the most appropriate game models for CIoT because of its hierarchical structure, and they require hierarchical game-based algorithms for interlinking and managing objects. As mentioned in the previous section, traditional game-theoretic algorithms are coupled, incur network resource utilization overhead, and are designed for single-user scenarios. Hence, the update rule must be designed carefully so that it uses neighbor information and guarantees convergence towards the desirable solution BIB004 . Moreover, it is also desirable to propose uncoupled algorithms that use only the local, partial action-payoff information of the neighboring players and objects. One of the best solutions currently available in the literature for CIoT is a Stackelberg game model with an uncoupled algorithm that exploits the action payoffs of neighboring objects. In this game model, the leader takes an action based on the situation and the followers act in response to, or follow, the leader's action. Either both leader and followers maximize their own utility functions, or the leader has no utility of its own and, as the resource user, aims to maximize the accumulated utility of the followers, which significantly improves the efficiency of the NE. On the other hand, when this local information is not available or is unknown a priori, only the player's individual action-payoff information can be used to reach the desirable solution. In uncoupled algorithms, the action of one player is affected by the actions of the other players, which makes it more difficult for the game models to achieve optimality and convergence. The take-away is to design the utility function carefully and to couple it with learning algorithms such as learning automata in order to achieve self-organization and optimization in practical implementations BIB003 , BIB004 . Finally, the cost of information exchange among users, as shown in Table 2 , is quite high for traditional coupled algorithms. Therefore, there is a pressing need to design game models with uncoupled algorithms that provide optimality and convergence of object and channel selection for intelligent decision making in CIoT.
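As an example of an uncoupled update that uses only a player's own observed payoff, the following sketch implements a linear reward-inaction learning automaton for channel selection. It is a generic illustration of the learning-automata idea mentioned above, not a specific algorithm from the cited works; the channel payoff values, noise level, and step size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical normalised payoff of each channel for this single object (unknown to it a priori).
CHANNEL_PAYOFF = np.array([0.3, 0.8, 0.5])
N, b, T = len(CHANNEL_PAYOFF), 0.05, 5000    # b = learning step size

prob = np.full(N, 1.0 / N)                   # mixed strategy over channels
for _ in range(T):
    a = rng.choice(N, p=prob)                # pick a channel according to the current strategy
    # noisy payoff observed locally -- no information about other players is exchanged (uncoupled)
    r = float(np.clip(CHANNEL_PAYOFF[a] + rng.normal(0, 0.1), 0.0, 1.0))
    # linear reward-inaction update: reinforce the chosen action in proportion to its payoff
    prob = prob - b * r * prob
    prob[a] += b * r
    prob = np.clip(prob, 1e-6, None)         # optional floor so that exploration never fully stops
    prob /= prob.sum()

print("learned strategy:", np.round(prob, 3))   # probability mass should concentrate on the best channel
```

In a multi-object setting, each object would run the same local update on its own observed payoff, which is exactly the kind of uncoupled behavior argued for above.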
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 2) NON REACTIVE MARKOVIAN GAME <s> In cognitive radio networks (CRNs), effective and efficient channel exploitation is imperative for unlicensed secondary users to seize available network resources and improve resource utilization. In this paper, we propose a simple channel sensing order for secondary users in multi-channel CRNs without a priori knowledge of primary user activities. By sensing the channels according to the descending order of their achievable rates with optimal stopping, we show that the proposed channel exploitation approach is efficient yet effective in elevating throughput and resource utilization. Simulation results show that our proposed channel exploitation approach outperforms its counterparts by up to 18% in a single-secondary user pair scenario. In addition, we investigate the probability of packet transmission collision in a multi-secondary user pair scenario, and show that the probability of collision decreases as the number of channels increases and/or the number of secondary user pairs decreases. It is observed that the total throughput and resource utilization increase with the number of secondary user pairs due to increased transmission opportunities and multi-user diversity. Our results also demonstrate that resource utilization can be further improved via the proposed channel exploitation approach when the number of secondary user pairs approaches the number of channels. <s> BIB001
|
Cognitive decision making can be achieved by combining game theory with the Markovian decision process. This approach provides a distributed solution for optimal learning and provisioning among multiple objects, enabling efficient object and channel selection. It involves multiple players competing for resources; each player relies on its own current state and does not require information about neighboring players in the Markovian environment. In this model, the current states and actions of the game players jointly determine the system state in the classical Markovian environment BIB001 . In BIB001 , Cheng and Zhuang presented a stochastic approximation algorithm that adaptively estimates NE policies and tracks such policies for non-stationary problems in which the statistics of the channel and user parameters evolve over time. Because the system state changes depend entirely on the current actions chosen by the players, this setting is termed a reactive Markovian game. In the case of CIoT, however, the game underlying the system state is non-reactive in the Markovian environment: the spectrum utilization and occupancy state of CIoT depend on the objects or players themselves rather than on their actions. The solutions presented in the literature for classical reactive Markovian game models, including stochastic approximation and value iteration as discussed in section IIIb, need to be modified appropriately for CIoT. Such drawbacks can therefore be addressed by developing effective solutions via non-reactive Markovian games for cognitive decision making in the IoT. Moreover, CIoT contains multiple agents and objects distributed across different domains, and the solutions should remain non-reactive across these domains, i.e., they should support intelligent decision making without requiring information about other users. The consideration here is to implement reinforcement learning algorithms or stochastic learning automata that converge in practice to the NE and CE of the game in the rapidly changing CIoT environment.
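For reference, the value iteration procedure mentioned above can be sketched on a toy two-state MDP (states might represent channel occupancy levels, actions "stay" versus "switch channel"). The transition probabilities, rewards, and discount factor below are hypothetical numbers chosen only to show the mechanics of the algorithm, not a model of any particular CIoT system.

```python
import numpy as np

# Toy MDP: 2 states, 2 actions. P[a][s][s'] = transition probability, R[a][s] = immediate reward.
P = np.array([
    [[0.9, 0.1],      # action 0 (stay on current channel)
     [0.4, 0.6]],
    [[0.5, 0.5],      # action 1 (switch channel)
     [0.7, 0.3]],
])
R = np.array([
    [1.0, -0.2],      # reward of action 0 in states 0, 1
    [0.6,  0.4],      # reward of action 1 in states 0, 1
])
gamma, n_states = 0.9, 2

V = np.zeros(n_states)
while True:
    # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("ast,t->as", P, V)
    V_new = Q.max(axis=0)                 # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-6:  # stop once the value function has converged
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal value per state:", np.round(V, 3), "greedy policy:", policy)
```

In a non-reactive CIoT setting the transition kernel would not depend on the objects' actions in the same way, which is precisely why the classical reactive solutions above need to be adapted.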
|
A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 3) MULTI-USER OPTIMAL STOPPING PROBLEM <s> The problem of distributed learning and channel access is considered in a cognitive network with multiple secondary users. The availability statistics of the channels are initially unknown to the secondary users and are estimated using sensing decisions. There is no explicit information exchange or prior agreement among the secondary users and sensing and access decisions are undertaken by them in a completely distributed manner. We propose policies for distributed learning and access which achieve order-optimal cognitive system throughput (number of successful secondary transmissions) under self play, i.e., when implemented at all the secondary users. Equivalently, our policies minimize the sum regret in distributed learning and access, which is the loss in secondary throughput due to learning and distributed access. For the scenario when the number of secondary users is known to the policy, we prove that the total regret is logarithmic in the number of transmission slots. This policy achieves order-optimal regret based on a logarithmic lower bound for regret under any uniformly-good learning and access policy. We then consider the case when the number of secondary users is fixed but unknown, and is estimated at each user through feedback. We propose a policy whose sum regret grows only slightly faster than logarithmic in the number of transmission slots. <s> BIB001 </s> A Survey of Decision-Theoretic Models for Cognitive Internet of Things (CIoT) <s> 3) MULTI-USER OPTIMAL STOPPING PROBLEM <s> This paper studies the tradeoff between throughput and multichannel diversity in multichannel opportunistic spectrum access (OSA) systems. We explicitly consider channel condition as well as the activities of the primary users. We assume that the primary users use the licensed channel in a slotted fashion and the secondary users can only explore one licensed channel at a time. The secondary users then sequentially explore multiple channels to find the best channel for transmission. However, channel exploration is time-consumed, which decreases effective transmission time in a slot. For single secondary user OSA systems, we formulate the channel exploration problem as an optimal stopping problem with recall, and propose a myopic but optimal approach. For multiple secondary user OSA systems, we propose an adaptive stochastic recall algorithm (ASRA) to capture the collision among multiple secondary users. It is shown that the proposed solutions in this paper achieve increased throughput both the scenario of both single secondary user as well as multiple secondary suers. <s> BIB002
|
The previous section gives a deep insight into the optimal stopping problem and shows that it is very efficient for resource selection and object discovery in single-user rather than multi-agent systems. CIoT, on the other hand, involves multi-agent cooperation and selection, so the remaining problem is the interaction among multiple agents for better resource selection and object discovery. This problem can be solved by a hybrid model that merges the optimal stopping problem with game theory for multi-agent systems. The literature contains little existing work: BIB002 and BIB001 considered multiple agents only through numerical simulation, and a general treatment of multi-agent optimal stopping problems has not yet been reported. The desired outcome is a practical solution to the multi-user optimal stopping problem for intelligent decision making in CIoT. The most promising solution in the existing literature, which could be reused with minor changes, is the stochastic recall algorithm, which intelligently and optimally recalls the previous states and statistical information of the players or objects, as discussed in section IIIc. This adaptive stochastic recall algorithm (ASRA) can also capture collisions among multiple secondary users. Combining game theory with ASRA would enable the decision maker to re-use previous information, such as player states, their statistical information, channel utilization, and the objects related to their actions, for intelligent decision making.
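The multi-user collision issue can be illustrated with a very loose sketch (this is not ASRA from BIB002 ; the channel model, the number of users, and the top-k randomization rule are all assumptions made for the example): each user independently probes channels and then, instead of transmitting on its single best channel, randomizes over its top-k channels so that users with similar observations are less likely to collide.

```python
import random

random.seed(3)
N_CHANNELS, N_USERS, TOP_K = 8, 3, 3

def probe_channels():
    """Each user independently observes a hypothetical rate on every channel it probes."""
    return {ch: random.uniform(0.0, 1.0) for ch in range(N_CHANNELS)}

def pick_channel(observed, top_k=TOP_K):
    """Randomise over the user's top-k channels instead of always taking the single best one,
    a simple way to reduce collisions among users whose observations rank the channels similarly."""
    ranked = sorted(observed, key=observed.get, reverse=True)[:top_k]
    return random.choice(ranked)

choices = [pick_channel(probe_channels()) for _ in range(N_USERS)]
collisions = len(choices) - len(set(choices))   # number of duplicated channel choices
print("chosen channels:", choices, "collisions:", collisions)
```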
|