file_path
stringlengths 45
45
| text
stringlengths 1.02k
568k
| publisher
stringclasses 1
value | title
stringlengths 7
235
| authors
stringlengths 5
404
⌀ | url
stringlengths 29
65
| year
int64 1.94k
2.03k
| license
stringclasses 1
value |
|---|---|---|---|---|---|---|---|
isprs/89fdb922_35e5_4fbb_bf46_a0d4b86d5e29.md
|
# Impact of Geolocation Data on Augmented Reality Usability:
A Comparative User Test
[PERSON]
[PERSON]
1 Media Engineering Institute (MEI), School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland - (julien.mercier, nicolas.chabloz, gregory.dozot, olivier.ertz, daniel.rappo)@heig-vd.ch
[PERSON]
1 Media Engineering Institute (MEI), School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland - (julien.mercier, nicolas.chabloz, gregory.dozot, olivier.ertz, daniel.rappo)@heig-vd.ch
[PERSON]
3 University of Teacher Education, HES-SO, Lausanne, Switzerland - [PERSON]
[PERSON]
1 Media Engineering Institute (MEI), School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland - (julien.mercier, nicolas.chabloz, gregory.dozot, olivier.ertz, daniel.rappo)@heig-vd.ch
[PERSON]
2 Lab-STICC, UMR 6285, CNRS, Universite Bretagne Sud, Vannes, France - [EMAIL_ADDRESS], [PERSON]SS]
[PERSON]
1 Media Engineering Institute (MEI), School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland - (julien.mercier, nicolas.chabloz, gregory.dozot, olivier.ertz, daniel.rappo)@heig-vd.ch
###### Abstract
While the use of location-based augmented reality (AR) for education has demonstrated benefits on participants' motivation, engagement, and on their physical activity, geolocation data inaccuracy causes augmented objects to jitter or drift, which is a factor in downgrading user experience. We developed a free and open source web AR application and conducted a comparative user test (n = 54) in order to assess the impact of geolocation data on usability, exploration, and focus. A control group explored biodiversity in nature using the system in combination with embedded GNSS data, and an experimental group used an external module for RTK data. During the test, eye tracking data, geolocated traces, and in-app user-triggered events were recorded. Participants answered usability questionnaires (SUS, UEQ, HARUS). We found that the geolocation data the RTK group was exposed to was less accurate in average than that of the control group. The RTK group reported lower usability scores on all scales, of which 5 out of 9 were significant, indicating that inaccurate data negatively predicts usability. The GNSS group walked more than the RTK group, indicating a partial effect on exploration. We found no significant effect on interaction time with the screen, indicating no specific relation between data accuracy and focus. While RTK data did not allow us to better the usability of location-based AR interfaces, results allow us to assess our system's overall usability as excellent, and to define optimal operating conditions for future use with pupils.
## 1 Introduction
This study is part of the ongoing _BiodivAR_ project, which attempts to assess the potential benefits of using augmented reality (AR) for outdoor education on biodiversity. In AR interfaces, digital objects can be overlaid on top of users' field of view in real-time, through the screen of a mobile device or a head-mounted display. When used sensibly in an educational setting, it may convey the impression of an enriched environment and make the material more attractive, thus motivating students to learn ([PERSON], 2020, [PERSON] et al., 2022). The most reported positive effects of AR in education are learning gains and motivation ([PERSON] et al., 2014). Our research is focused on the use of _location-based_ AR in particular, where the position of augmented objects is computed based on their geographic coordinates relative to the user's location as estimated by the mobile device's GNSS. With this technology, augmented objects can be built remotely from any given geodata, as opposed to marker-based AR which requires physical markers to be physically placed on target locations. Location-based AR specially promotes learning in context ([PERSON] et al., 2021, [PERSON] et al., 2014), ecological engagement ([PERSON] et al., 2010), and causes users to experience a positive interdependence with nature ([PERSON] et al., 2011), which fosters improved immersion and learning. Last but not least, location-based AR shows positive effects on the physical activity of users across genders, ages, weight status, and prior activity levels ([PERSON] et al., 2017). However, location-based AR requires steady and continuously accurate data to operate. While GNSS technology has evolved and improved in the past decades, it has been more of an evolution than a revolution. Usability issues have been reported by a number of studies ([PERSON] et al., 2014, [PERSON] et al., 2009, [PERSON] and [PERSON], 2013, [PERSON] et al., 2011, [PERSON] et al., 2012), most of which blame the inaccuracy of mobile devices' embedded GNSS sensors. Some studies considered that these recurring problems made AR distracting and frustrating and eventually favored marker-based AR, which is more advanced and offers better user experience ([PERSON] and [PERSON], 2013, [PERSON] et al., 2018).
## 2 Background
A first proof-of-concept was developed in 2017, featuring a series of geolocated points of interest (POIs) on biodiversity. A test with ten-year-old pupils confirmed the relevance of using AR to support educational field trips ([PERSON] et al., 2018) while also revealed usability challenges:
1. The system should allow non-expert users to create AR experiences ([PERSON] et al., 2015)
2. Users should be able to publish observations rather than being restricted to a passive viewing role;
3. The instability of augmented objects deteriorates usability. Participants spent 88.5 % of the time looking at the tablet rather than with the surrounding nature. This imbalance could be in part related to inaccurate geolocation data: participants were observed spending considerable time reorienting themselves ([PERSON] et al., 2018).
In order to address these identified issues, we developed _BiodivAR1_, a free and open source (GNU GPLv3.0) web application using a user-centered design process ([PERSON] et al., 2023).
It was built using the web framework A-Frame2, for which we also created a custom library3 for the creation of WebXR location-based objects in A-Frame. We used the Leaf4 library for the interactive maps. _BiodivAR_ enables the creation and visualization of geolocated POIs in AR (see Figure 1) as well as a cartographic authoring tool for the collaborative management of AR environments (see Figure 2). They can be shared publicly with or without editing privileges. The application allows anyone without technological know-how to create AR environments by importing/exporting geospatial data and styling POIs by attaching medias to them. Medias can be location-triggered (visible/audible) according to various distance thresholds set by the author.
Footnote 3: [[https://www.ardusimple.com/product/rk-handheld-surveyor-kit/](https://www.ardusimple.com/product/rk-handheld-surveyor-kit/)]([https://www.ardusimple.com/product/rk-handheld-surveyor-kit/](https://www.ardusimple.com/product/rk-handheld-surveyor-kit/))
## 3 Research Goals
The purpose of our research _overall_ is to assess the potential benefits of using this application in the context of biodiversity education. Before introducing the tool to pupils, it seemed important to ensure its usability. This comparative user test will allow us to define and guarantee the best possible conditions of use for a younger audience. The goals of this study can be synthesized as follows:
1. Assess the overall usability of the AR application.
2. Assess the impact of geolocation data accuracy on usability, exploration, and focus.
3. Gather user feedback for future improvements4. Footnote 4: The qualitative feedbacks were not included in this paper, as we extensively focused on the quantitative data and group comparison.
The literature review and the observations made during the first iteration led us to propose the following hypothesis: Inaccurate geolocation data negatively affects usability. Additionally, we are looking to investigate the impact that geolocation data accuracy may have on exploration and focus in location-based AR, about which we have not been able to find any literature. The resulting research questions are:
Q1: Does geolocation data accuracy predict usability scores? Q2: Is geolocation data accuracy related to exploration? Q3: Is geolocation data accuracy related to focus?
## 4 Materials and Methods
### Experimental design
The present study aims to measure and compare the usability of a location-based AR application used in combination with different geolocation data sources. Using our authoring tool, we created an AR environment with POIs on biodiversity in the surroundings of the School of Engineering and Management Vaud in Yverdon-les-Bains (Switzerland). After a brief introduction to the tool, all participants freely explored the AR environment for 15 minutes using a Samsung Galaxy Tab Active3 tablet with a SIM card for cellular data. As shown in Figure 3, the comparative user test (n = 54) includes in two groups:
**GNSS**: the control group received geolocation data coming from the GNSS sensor embedded in the mobile device
**RTK**: the experimental group received geolocation data coming from an external Archusimple RTK kit5.
Footnote 5: The qualitative feedbacks were not included in this paper, as we extensively focused on the quantitative data and group comparison.
Footnote 6: Exploration is represented by the distance walked, the number of POIs visited, and the number of times the 2D map was opened.
### Participants
The sample includes 54 participants (\(\upmodels\) 21, \(\sigma\) = 33), with a mean age of M = 25.72 (SD = 4.80). They are students and collaborators of the School of Engineering and Management Vaud, and they each signed an informed consent form for the use of the data collected. Login credentials (identifier + password)
Figure 1: _BiodivAR_’s AR interface: a) view of two POIs from a distance; b) the 2D map is opened in split view; c) after entering the radius of a POI, contextual data on the adjacent plant specimen is triggered. [[https://biodivar.heig-vd.ch/](https://biodivar.heig-vd.ch/)]([https://biodivar.heig-vd.ch/](https://biodivar.heig-vd.ch/))
Figure 3: Experimental design of the comparative user test.
Figure 2: Experimental design of the comparative user test.
were created for each participant to record their data separately and facilitate comparison. Among them, 47 agreed to wear eye-tracking glasses, of which 41 successfully recorded data. They were randomly assigned to each group. The control group's (GNSS) mean age is M = 27.5 (SD = 6.09), and it includes 12 \(\upsigma\) and 15 \(\upsigma\). The experimental group's (RTK) mean age is M = 24.2 (SD = 2.22) and it includes 9 \(\upsigma\) and 18 \(\upsigma\). The first participant eventually had to be excluded from the final results because they experienced numerous crashes due to a bug that was fixed for the subsequent participants. The treatment they received was therefore too different to compare.
### Data collection and processing
The four main concepts our study seeks to connect are \"location data accuracy\", \"usability\", \"exploration\", and \"focus\". The measurable observations we chose to represent those concepts are listed in Table 1. In our experiment, the two groups (or treatments) operationalize the concept of \"geolocation data accuracy\". This concept is represented by two variables: _accuracy_ and _continuity_. The accuracy attribute is provided by the Geolocation API along with the horizontal location data as latitude and longitude9. It denotes the accuracy level of the latitude and longitude coordinates in meters. We use the average accuracy participants were exposed to while in AR mode as the indicator for accuracy. However, in the specific context of location-based AR, sudden changes in data accuracy heavily impact the display of augmented objects in the interface. An indicator for continuity in the data is thus the amount of outliers-i.e. the points that are visibly out of a user's trajectory (as shown in Figure 4). An additional indicator for continuity in the data is the standard deviation of the data accuracy the participants of each group was exposed to. As far as the concept of \"usability\" goes, it is represented by a series of nine variables whose indicators are the different scales of the three questionnaires (SUS, HARUS, UEQ): _overall usability_, _case of handling_, _case of understanding_, _attractability_, _user-friendliness_, _efficiency_, _dependability_, _motivation_, _innovativeness_. The concept of \"exploration\" is represented by three variables: _quantity_, _diversity_, and _ease_. The distance walked is the indicator of the quantity of exploration. The amount of POIs visited is the indicator of the diversity of exploration. An important use of the 2D map may indicate that participants required assistance in navigating. The amount of times the 2D map was opened is thus the indicator of the ease users had exploring. Finally, the concept of \"focus\" in our study is represented by a _screen interaction_ variable, whose indicator is the amount of time participants spent interacting with the tablet screen _versus_ with the real world.
Footnote 9: [[https://w3c.github.io/geolocation-api](https://w3c.github.io/geolocation-api)]([https://w3c.github.io/geolocation-api](https://w3c.github.io/geolocation-api))
#### 4.3.1 Geolocation data accuracy
During the test, participants' geographical coordinates were logged at 1 Hz. Each log also contains an attribute for location accuracy, user ID and a timestamp. The resulting users' trajectories can be visualized in the application (see Figure 4) and downloaded as GeoJSON files for further analysis. The color of the trajectory changes when the AR session is stopped and resumed again. We downloaded the data and calculated the mean location accuracy each participant was exposed to. As shown in Figure 4, the trajectories-in particular that of the RTK group-contained outliers, which were removed manually using the free and open source software QGIS to get a more accurate estimate of the actual distance travelled (as an indicator of our \"exploration quantity\" variable, see 4.3.3). By calculating the different amount of points before and after this manual processing, the outliers were summed for each participant. Once the data was cleaned, we calculated the total distance walked by each participant. Because there were variations in the duration of each participant's test (min = 9'14, max = 24'11 s), the data was normalized for a duration of 15 minutes. This allowed us to calculate:
1. The average geolocation data accuracy
2. The amount of outliers in the data
3. The standard deviation of the geolocation data accuracy
#### 4.3.2 Usability
Immediately after the test, participants answered an online survey containing demographic questions (age, gender), an open question for qualitative feedback, and three usability questionnaires:
* SUS (System Usability Scale) is a generic, technology-independent 10 item questionnaire on a 5 point Likert scale, frequently used for generic evaluation of a system ([PERSON], 1996). The [PERSON]'s alpha of the SUS questionnaire is 0.79, showing an appropriate internal consistency. In accordance with the instructions of the scale's authors, the SUS score is calculated as follows: 1 point was subtracted from the odd-numbered (phrased positively) items' scores. We subtracted the even-numbered (phrased negatively) items score to 5. The processed scores were added together and then multiplied by 2.5 to get an individual user's score on a scale of 100. While a comparison between two scores is self-explanatory, we used an adjective scale ([PERSON], 2009) to qualify the results individually.
\begin{table}
\begin{tabular}{|p{113.8 pt}|p{113.8 pt}|} \hline
**Concept** & **Variable** & **Indicator** \\ \hline \multirow{3}{*}{Geolocation data accuracy} & Quality & Average geolocation data accuracy \\ \cline{2-3} & & Amount of outliers \\ \cline{2-3} & Continuity & Standard deviation of data accuracy \\ \hline \multirow{6}{*}{Usability} & Overall visibility & SUS score \\ \cline{2-3} & Ease of handling & HARUS (manipublishing) score \\ \cline{2-3} & Ease of understanding & HARUS (comprehensibility) score \\ \cline{2-3} & Attractability & UEQ (attractativeness) score \\ \cline{2-3} & User-friendliness & UEQ (preductivity) score \\ \cline{2-3} & Efficiency & UEQ (efficiency) score \\ \cline{2-3} & Dependability & UEQ (dependability) score \\ \cline{2-3} & Motivation & UEQ (dimultness) score \\ \cline{2-3} & Innovativeness & UEQ (novelly) score \\ \hline \multirow{3}{*}{Exploration} & Quantity & Distance walked \\ \cline{2-3} & Diversity & Amount of POIs visited \\ \cline{1-1} \cline{2-3} & Ease & Amount of times 2D map was opened \\ \cline{1-1} \cline{2-3} & Focus & Screen interaction & Interaction time with tablet screen \\ \hline \end{tabular}
\end{table}
Table 1: Operationalization table.
Figure 4: a) A trajectory from the GNSS group. The short light green line is at an impossible location (on top of a tall building), indicating outliers. b) A trajectory from the RTK group. The star-shaped spikes indicate the presence of many outliers.
* HARUS (Handheld Augmented Reality Usability Scale) is a mobile AR-specific 16 item questionnaire ([PERSON] et al., 2014) on a 7 point Likert scale that focuses on handheld devices and emphasizes perceptual and ergonomic issues. The [PERSON]'s alpha of the HARUS questionnaire is 0.798, showing appropriate internal consistency. It has two components: _manipulability_--the ease of handling the AR system, and _comprehensibility_--the ease to read the information presented on screen. In accordance with the instructions of the scale's authors, the HARUS scores are calculated as follows: We subtracted the odd-numbered (phrased negatively) items score to 7. 1 point was subtracted from the even-numbered (phrased positively) items' scores. The processed scores for items 1 to 8 were added together, divided by 48, and multiplied by 100 to get the individual \"manipulability\" score on a scale of 100. Similarly, the processed scores for items 9 to 16 were added together, divided by 48, and multiplied by 100 to get the individual \"comprehensibility\" score on a scale of 100. HARUS was designed so that its scores are commensurable with SUS scores.
* UEQ (User Experience Questionnaire) is a 26 item questionnaire in the form of semantic differentials: each item is scored on a 7 point scale (from -3 to +3, with 0 as neutral) with two terms with opposite meanings at each extreme (i.e. attractiveness/attractive). It provides a comprehensive measure of user experience ([PERSON] et al., 2008). It includes six scales, covering classical usability aspects such as _efficiency_ (can users solve their tasks without unnecessary effort/), _perspicity_ its it easy to learn how to use the application?, and _dependability_ (does the user feel in control of the interaction?), as well as broader user experience aspects such as _attractiveness_ (do users like the application?), _novelty_ (is the application innovative and creative?), and _stimulation_ (is it exciting and motivating to use the application?). UEQ is typically routinely used to statistically compare two version of a system to check which one has the better user experience. Thus, the UEQ evaluations of both systems or both versions of a system are compared on the basis of the scale means for Each UEQ scale. _Attractiveness_ is calculated by averaging the scores from items 1, 12, 14, 16, 24, and 25. _Perspictity_ is calculated by averaging the scores from items 2, 4, 13, and 21. _Efficiency_ is calculated by averaging the scores from items 9, 20, 22, and 23. _Dependability_ is calculated by averaging the scores from items 8, 11, 17, and 19. _Simulation_ is calculated by averaging the scores from items 5, 6, 7, and 18. Novelty is calculated by averaging the scores from items 3, 10, 15, and 26. Values range between -3 (horribly bad) and +3 (extremely good), but in general only values in a restricted range will be observed. The calculation of means over a panel of participants make it extremely unlikely to observe values above +2 or below -2, as specified in the UEQ handbook ([PERSON], 2015). As per their interpretation, values between -0.8 and 0.8 correspond to a neutral evaluation of the corresponding scale and values greater than 0.8 represent a positive evaluation.
These questionnaires provided scores for the nine scales reported in Table 1 as indicators of our usability variables.
#### 4.3.3 Exploration
During the test, various in-app, user-triggered events were recorded by the application. These included: when the AR session was initiated or exited, when the 2D map was opened or closed, and when the triggering radius of a POI was entered or exited. Each log also contains the coordinates the action took place at, the user ID and a timestamp. The resulting users' action log can be visualized in the application and downloaded as GeoJSON files. Events are represented with red circles on the 2D map (see Figure 4). We downloaded the data and calculated the number of POIs each participant visited as well as how many times they opened the 2D map. These values (POIs visited, 2D map opened) were normalized for a test duration of 15 minutes. This allowed us to calculate:
1. The amount of POIs visited
2. The amount of times the 2D map was opened
The distance walked by each participant was calculated from the geolocation data (see 4.3.1).
#### 4.3.4 Focus
The goal of using eye tracking glasses and data in our study is to determine for how long participants were looking in or out of the tablet screen. 47 out of 54 participants were able-and agreed-to wear eye trackers (Tobi Pro Glasses 3), recording their gaze for the duration of the test. The 7 participants that didn't either choose not to or couldn't because they had prescription glasses. Despite rigorous implementation, 6 recordings did not work as expected and no files were saved. The 41 remaining recordings were imported in Tobii's analysis software. Unfortunately, its tools do not support tracking of moving areas of interest (i.e. the surface of the tablet). We exported the videos with the overlaying gaze point and extracted 10 frames per second, resulting in a dataset of 380K images, an instance of which is shown in Figure 5. We attempted to classify the data with openCV pattern recognition, but the variability prevented from obtaining any results. We resolved to train a deep learning multiclass image classifier model by fine-tuning a pretrained vision transformer (ViT) model with our dataset ([PERSON] et al., 2020). We first had to manually label a random selection of 10K frames with \"in\" or \"out\" labels corresponding to whether the point was or out of the tablet screen (see Figure 5). After training for only one epoch using Google's exploratory and obtaining a satisfying validity of 95%, we inferred the whole dataset which provided a label for every frame10. They were encoded in order to calculate the ratio of time each user spent looking at the tablet screen _versus_ outside of it, at the real world.
Figure 5: Eye tracking data sample. The user’s gaze is located within the tablet screen area.
## 5 Results
### Data analysis
Statistical analysis were made with the free and open platform Jamovi (The Jamovi project, 2022). In the following subsections, we report descriptive statistics (M, SD), and compare our groups (GNSS _versus_ RTK) using an independant Student _t_-test to emphasize to which extent both groups differ on our variables of interest. In cases where the homogeneity of variances assumption is not met, we used a Welch _t_-test, which is more robust11.
Footnote 11: The data is available here: [[https://zenodo.org/record/7845707](https://zenodo.org/record/7845707)]([https://zenodo.org/record/7845707](https://zenodo.org/record/7845707)).
### Geolocation data accuracy
#### 5.2.1 Average geolocation data accuracy
As shown in Figure 6, the mean accuracy for the GNSS group is M = 11.0 (SD = 15.3), and M = 33.6 (SD = 24.8) for the RTK group. The value is in meters, meaning the data the GNSS group was exposed to was accurate within a 11 meters radius, whereas the RTK group got data accurate within a 33.6 meters radius. A Welch _t_-test was used. The results show a significant difference between the two groups ((43.5) = -3.99, p = <.001).
#### 5.2.2 Outliers
As shown in Figure 7, the GNSS group trajectories contained M = 7.2 (SD = 7.55) outliers, and these of the RTK group M = 46.8 (SD = 40.1). A Welch _t_-test was used. The results show a significant difference between the two groups ((27.9) = -5.04, p = <.001).
#### 5.2.3 Standard deviation geolocation data accuracy
As shown in Figure 8, the data participants from the GNSS group were exposed to had a standard deviation of M = 32.0 (SD = 77.7), and that of the RTK group M = 168.3 (SD = 120.1). A Welch _t_-test was used. The results show a significant difference between the two groups (t(44.7) = -4.93, p = <.001).
### Usability
The means of each group for all nine scales from the three usability questionnaires are reported in Table 2 along with _t_-test's p values for significance assessment.
#### 5.3.1 Sus
As shown in Figure 9, the mean SUS score for the GNSS group is M = 81.7 (SD = 9.74). The mean SUS score for the RTK group is M = 74.4 (SD = 12). The results show a significant difference between the two groups (t(51) = 2.45, p = 0.018).
#### 5.3.2 Harus
On the _manipulability_ scale (indicating ease of handling the AR system), the mean score for the GNSS group is M = 76.7 (SD = 13) and that of the RTK group is M = 68.1 (SD = 16.1), as shown in Figure 10. The results show a significant difference between the two groups (t(51) = 2.13, p = 0.038). On the _comprehensibility_ scale (indicating ease of understanding information presented in the AR interface), the mean score for the GNSS group is M = 78.3 (SD = 11.3) whereas the mean score and that of the RTK group is M = 74.9 (SD = 12.9). The results _do not_ show any significant difference between the two groups (t(51) = 1.01, p = 0.318).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Scale** & **GNSS** & **RTK** & _t_-test** \\ \hline
**SUS** & M = 81.7 & M = 74.4 & t(51) = 2.45, \\ & SD = 97.4 & SD = 12.0 & p = 0.018 \\ \hline
**HAREUS** (manipulability) & M = 76.7 & M = 68.1 & t(52) = 2.13, \\ & SD = 13 & SD = 16.1 & p = 0.038 \\ \hline
**HAREUS** (comprehensibility) & M = 78.3 & M = 74.9 & t(51) = 1.01, \\ & SD = 11.3 & SD = 12.9 & p = 0.318 \\ \hline
**UEQ** (attractiveness) & M = 12.7 & M = 1.1 & t(51) = 2.65, \\ & SD = 0.7 & SD = 0.98 & p = 0.011 \\ \hline
**UEQ** (pepulicity) & M = 20.2 & M = 14.55 & t(46) = 2.61, \\ & SD = 0.64 & SD = 0.92 & p = 0.012 \\ \hline
**UEQ** (efficiency) & M = 1.24 & M = 0.85 & t(51) = 5.88, \\ & SD = 0.85 & SD = 0.94 & p = 0.121 \\ \hline
**UEQ** (dependability) & M = 1.17 & M = 1.02 & t(51) = 0.87, \\ & SD = 0.68 & SD = 0.62 & p = 0.39 \\ \hline
**UEQ** (stimulation) & M = 18.4 & M = 1.31 & t(51) = 1.93, \\ & SD = 0.84 & SD = 11.11 & p = 0.059 \\ \hline
**UEQ** (novelty) & M = 1.8 & M = 1.21 & t(51) = 2.45, \\ & SD = 0.85 & SD = 0.98 & p = 0.018 \\ \hline \end{tabular}
\end{table}
Table 2: Usability results by group and _t_-tests.
Figure 8: Standard deviation geolocation data accuracy by group.
Figure 6: Geolocation data accuracy by group.
Figure 7: Amount of outliers by group.
#### 5.3.3 Ueq
As shown in Figure 11, on the _attractiveness_ scale, the mean score for the GNSS group is M = 1.72 (SD = 0.7) and that of the RTK group is M = 1.1 (SD = 0.98). The results show a significant difference (t(51) = 2.65, p = 0.011). On the _perspectivity_ scale, the mean score for the GNSS group is 2.02 (SD = 0.64) and that of the RTK group is 1.45 (SD = 0.92). A Welch _t-_test was used. The results show a significant difference between the two groups (t(46.7) = 2.61, p = 0.012). On the _efficiency_ scale, the mean score for the GNSS group is 1.24 (SD = 0.85) and that of the RTK group is 0.85 (SD = 0.94). The results _do not_ show any significant difference (t(51) = 1.58, p = 0.121). On the _dependability_ scale, the mean score for the GNSS group is 1.17 (SD = 0.68) and that of the RTK group is 1.02 (SD = 0.62). The results _do not_ show any significant difference (t(51) = 0.87, p = 0.39). On the _stimulation_ scale, the mean score for the GNSS group is 1.84 (SD = 0.84) and that of the RTK group is 1.31 (SD = 1.11). The results _do not_ show any significant difference (t(51) = 1.93, p = 0.059). On the _novelty_ scale, the mean score for the GNSS group is 1.8 (SD = 0.85) and that of the RTK group is 1.21 (SD = 0.89). The results show a significant difference (t(51) = 2.45, p = 0.018).
### Exploration
#### 5.4.1 Distance walked
As shown in Figure 12, the GNSS group walked an average distance of M = 586.15 (SD = 96.24) meters, whereas the RTK group walked an average distance of M = 525.94 (SD = 71.9) meters. The results show a significant difference (t(51) = 2.59, p = 0.013).
#### 5.4.2 POIs visited
The GNSS group visited an average of M = 21.09 (SD = 4.02) POIs, whereas the RTK group visited an average of M = 19.29 (SD = 5.87). The results _do not_ show any significant difference (t(51) = 1.30, p = 0.199).
#### 5.4.3 Map opened
The GNSS group opened the 2D map M = 2.83 (SD = 2.24) times in average, whereas the RTK group opened it M = 1.91 (SD = 2.41) times. The results _do not_ show any significant difference (t(51) = 1.44, p = 0.157).
### Focus
The GNSS group spend an average M = 73.3% (SD = 9.81) of the time looking at the tablet screen. The RTK group spend an average M = 69.2% (SD = 12.4) of the time looking at the tablet screen. The results _do not_ show any significant difference (t(51) = 1.16, p = 0.251).
## 6 Conclusions
The purpose of the study was to assess the impact of geolocation data on the usability of our location-based AR system. To test our hypotheses, we exposed the participants to different geolocation data sources with significantly different accuracies. While we expected RTK data to be more accurate and that it
Figure 11: UEQ scores by group.
Figure 12: Distance walked by group.
Figure 10: HARUS scores by group.
Figure 9: SUS scores by group.
would enable us to improve usability, analysis highlights that it was significantly less accurate and less continuous than GNSS data. This appears to be due to the fact that the embedded GNSS sensor contains filters that preprocess data and remove most of the outliers. In contrast, RTK data purposefully remains \"raw\", which is valuable for an advanced user. RTK data accuracy is very efficient when used on an isolated basis (ie. at a 2D map scale), but not particularly suitable for a real-time continuous usage (where location is measured several times per second) on a 1:1, tridimensional scale, at least without any filters applied onto it. Despite this contingency, both the quality and continuity of the geolocation data accuracy the two groups were exposed to was significantly different, which is the essential premise for testing our hypothesis and addressing our research questions. Regarding our main research question, results reveal that the GNSS group, who used the AR application in combination with more accurate and continuous data, reported higher scores in all usability scales, of which five out of nine were statistically significant. This supports our initial hypothesis that poor data accuracy negatively impacts the usability of a location-based AR system. Futures studies should however investigate whether RTK data with proper outlier processing may actually better usability. Our results further highlight that the GNSS group walked more than the RTK group, revealing that the accuracy of geolocation data was partially related to exploration, at least for the quantity indicator. However, due to the manual removal of the outliers-which were significantly more frequent in the RTK group-from the trajectories, the data could be biased. It would be necessary to record a trajectory with both modalities, remove the outliers and observe if there are not significant difference between the measurements to ensure that there are no bias. The comparison on the exploration diversity indicator (amount of POIs visited) was not significantly different. Additionally, although the difference was not significant, the GNSS group opened the 2D map more often than the RTK group in average, suggesting the RTK group could have had more ease exploring. Our results further highlight that there were no significant difference between the ratio of time participants from each group spent interacting with the tablet screen, which would indicate that there is no particular relation between the accuracy of geolocation data and focus.
Although the two experiments cannot be properly compared, because the tests took place 5 years apart under different conditions, we note that participants spent 69.2%-73.3% of the time looking at the tablet screen, which seems to be a meaningful longitudinal progress from the measurement that was made on our 2017 proof-of-concept, where participants interacted with the screen for 88.5 % of the time ([PERSON] et al., 2018). While we are not aware of a method to determine the ideal proportion, this measure overall remains an interesting indicator of the importance of the tablet in this type of activity. In a wide review of mobile learning projects, technology was found to dominate the experience in a problematic way in 70% (28/38) of the cases ([PERSON] et al., 2006). While using RTK data did not allow us to positively impact the usability of our system, our study however demonstrated the impact of varying geolocation data accuracy on usability and exploration. The immediate benefit of performing this comparative study is for us to define the most suitable conditions of use before offering our system to a young audience, as well as to ensure an adequate overall level of usability. The overall score reported by the GNSS group allows us to qualify the application's usability as \"excellent\" according to the SUS adjective scale (Bangor, 2009).
## 7 Acknowledgements
The authors thank [PERSON] for his help with the organization of the tests and the eye tracking data collection. Study participation was voluntary, and written informed consent to publish this paper was obtained from all participants involved in the study. Participants were informed that they could withdraw from the study at any point. The data presented in this study is openly available on Zenodo at [[https://zenodo.org/record/7845707](https://zenodo.org/record/7845707)]([https://zenodo.org/record/7845707](https://zenodo.org/record/7845707)). This research was funded by the Swiss National Science Foundation (SNSF) as part of the NRP 77 \"Digital Transformation\" (project number 407740_187313) and by the University of Applied Sciences and Arts Western Switzerland (HES-SO): Programme strategique \"Transition numerique et enjeux societaux\". The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
## References
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON] t., 2011. The concept of flow in collaborative game-based learning. _Computers in Human Behavior_, 27(3), 1185-1194. doi.org/10.1016/j.chb.2010.12.013.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. A Review of Extended Reality (XR) Technologies in the Future of Human Education: Current Trend and Future Opportunity. _Journal of Human Reproductive Sciences_, 1, 81-96. doi.org/10.11113/humentech.v1n2.27.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], 2021. Mobile Augmented Reality and Outdoor Education. _Builil Environment_, 47(2), 223-242. doi.org/10.2148/penv.47.2.223.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Augmented Reality Trends in Education: A Systematic Review of Research and Applications. _Journal of Educational Technology & Society_, 17(4), 133-149. jstor.org/stable/jedotechosci.17.4.133.
* JUX. _JUX
- The Journal of User Experience_. uxpajournal.org/determining-what-individual-sus-scores-mean-adding-an-adjective-rating-scale/.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], 2010. Promoting the Use of Outdoor Learning Spaces by K-12 Inspective Science Teachers Through an Outdoor Professional Development Experience. [PERSON], [PERSON], [PERSON] (eds), _The Inclusion of Environmental Education in Science Teacher Education_, Springer Netherlands, Dordrecht, 97-110.
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON] [PERSON], 2013. A mixed methods assessment of students' flow experiences during a mobile augmented reality science game. _Journal of Computer Assisted Learning_, 29(6), 505-517. doi.org/10.1111/jcal.12008.
* [PERSON] (1996) [PERSON], [PERSON], 1996. SUS: A 'Quick and Dirty' Usability Scale. _Usability Evaluation In Industry_, CRC Press.
* [PERSON] et al. (2013)[PERSON], [PERSON], [PERSON], [PERSON], 2014. An Augmented Reality-based Mobile Learning System to Improve Students' Learning Achievements and Motivations in Natural Science Inquiry Activities. _Journal of Educational Technology & Society_, 17(4), 352-365. jstor.org/stable/jeductechosci.17.4.352.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2015. Preparing augmented reality learning content should be easy: UNED ARLE-an authoring tool for augmented reality learning environments. _Computer Applications in Engineering Education_, 23(5), 778-789. doi.org/10.1002/cae.21650.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing using immersive devices. _IOP Conference Series: Materials Science and Engineering_, 364, 012048. doi.org/10.1088/1757-899X/364/1/012048.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. doi.org/10.48550/ARXIV.2010.11929.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2009. Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning. _Journal of Science Education and Technology_, 18(1), 7-22. doi.org/10.1007/s10956-008-9119-1.
* [PERSON] (2020) [PERSON], 2020. _Augmented reality in education: a new technology for teaching and learning_. Springer International Publishing.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], 2006. _The Focus Problem in Mobile Learning_. IEEE, Athens.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. _Augmented reality technologies for biodiversity education--a case study_. 12-15 June 2018.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. _Construction and Evaluation of a User Experience Questionnaire_. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. _CityWeAR: A mobile outdoor AR application for city visualization_.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. BiodivAR: A Cartographic Authoring Tool for the Visualization of Geolocated Media in Augmented Reality. _ISPRS International Journal of Geo-Information_, 12(2), 61. doi.org/10.3390/ijgi12020061.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], 2011. Research Note: The Results of Formatively Evaluating an Augmented Reality Curriculum Based on Modified Design Principles. _International Journal of Gaming and Computer-Mediated Simulations (IJGCMS)_, 3(2), 57-66. doi.org/10.4018/jgems.2011040104.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], 2017. An adoption framework for mobile augmented reality games: The case of Pokemon Go. _Computers in Human Behavior_, 76, 276-286. doi.org/10.1016/j.chb.2017.07.030.
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON], 2013. Off the paved paths: Exploring nature with a mobile augmented reality learning tool. _Journal of Mobile Human Computer Interaction_, 5(2), 21-49. doi.org/10.4018/jmhci.2013040102.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. _A usability scale for handheld augmented reality_. VRST '14, Association for Computing Machinery, New York, NY, USA.
* [PERSON] (2015) [PERSON], 2015. _User Experience Questionnaire Handbook_.
* The jamovi project (2022) The jamovi project, 2022. jamovi Software, Version 2.3. jamovi.org.
|
isprs
|
IMPACT OF GEOLOCATION DATA ON AUGMENTED REALITY USABILITY: A COMPARATIVE USER TEST
|
J. Mercier, N. Chabloz, G. Dozot, C. Audrin, O. Ertz, E. Bocher, D. Rappo
|
https://doi.org/10.5194/isprs-archives-xlviii-4-w7-2023-133-2023
| 2,023
|
CC-BY
|
isprs/e3aa18a8_2d3d_4c16_8341_9e5775edee64.md
|
Application of Surface Deformation Monitoring in Mining Area by the Fusion of Insar and Laser Scan Data
[PERSON]
Corresponding author
[PERSON]
Corresponding author
[PERSON]
Corresponding author School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China [EMAIL_ADDRESS]
[PERSON]
School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China [EMAIL_ADDRESS]
###### Abstract
Differential Synthetic Aperture Radar Interferometry (D-InSAR) as a new earth observation technique has become an important tool for monitoring ground movements caused by underground coal mining. However, the low resolution and accuracy of Digital Elevation Model (DEM) cause more error value of InSAR line-of-sight(LOS) surface deformation measurement. In this paper, a couple of Radarsat-2 and a pair of TerrasSAR with SRTM, GDEM and LiDAR DEM are processed respectively to reveal the subsidence basin and the results have been compared each other. It illustrate that the accuracy of D-InSAR results been better improved by the high accuracy and resolution DEM.
D-InSAR, laser scan data, SRTM, GDEM, deformation, data fusion +
Footnote †: Corresponding author
## 1 Introduction
Ground surface movements commonly cause disturbance and damage for buildings and environment around the subsidence areas. The knowledge and prediction of the evolution of the temporal and spatial distribution of the movements are essential to delineate the most affected areas, to understand the mechanisms involved, and to establish counter measures to prevent damages. Underground mining activities always cause subsidence of the ground surface due to the advance of the excavation fronts and the progressive closure or collapse of the mineral extraction galleries. The magnitude of the displacements depends on different parameters, such as the depth of the mining galleries and the time elapsed since the onset and/or the abandonment of the excavation ([PERSON], 1995; [PERSON], 1995). The evolution of the subsidence of a point has been described by many authors through total station, levels and global positioning system etc.
In recent years, a new earth observation technique named Differential Synthetic Aperture Radar Interferometry (D-InSAR) has become an important tool for monitoring temporal and spatial ground movements. This method has clear advantages over classical monitoring methods, the first one is its high spatial coverage. Classical techniques measure ground displacements at a few discrete points while D-InSAR provides a more complete pattern of the displacements' field with measurements over a wide area. Another advantage of the technique is the existence of a historical database of SAR images, which was started more than several decades ago, that enables the study of past situations([PERSON] 2013).Differential Interferometric Synthetic Aperture Radar (D-InSAR) is a technique for utilizing the phase difference between two SAR images acquired before and after the event with different look angles and a topographic signal from a DEM as correction to reveal the surface subsidence between acquisition times of two images. These techniques have been successfully applied to detect and measure ground subsidence in areas subjected to underground mining exploitation ([PERSON] et al., 2008; [PERSON] et al., 2009).In traditional, a resolution of 90m SRTM (Shuttle\(\mathcal{Q}_{noise}\) = the noise term due to variability in scattering from the pixel, thermal noise and coregistration errors
How to get \(\mathcal{O}_{disp}\) is the problem to be resolved by D-InSAR technique, and \(\mathcal{O}_{popo}\) is the main factor to be removed.
### SRTM data
The NASA Shuttle Radar Topographic Mission (SRTM) has provided digital elevation data (DEMs) for over 80% of the globe([PERSON], [PERSON], 2008).The SRTM digital elevation data, is a major breakthrough in digital mapping of the world, and provides a major advance in the accessibility of high quality elevation data for large portions of the tropics and other areas of the developing world. The SRTM data is available as 3 arc second (approx. 90m resolution) DEMs. A 1 arc second data product was also produced, but is not available for all countries. The vertical error of the DEM's is reported to be less than 16m.As shown in Figure 3(a), the SRTM is been resized to 3 meters in one pixel.
### GDEM data
On June 29, 2009, NASA and theMinistry of Economy, Trade, and Industry (METI) of Japan released a Global DigitalElevation Model (GDEM) to users worldwide at no charge as a contribution to theGlobal Earth Observing System of Systems(Aster, 2009). This GDEM was found to have an overall accuracy of around 20-m at the 95 percent (%) confidence level.NASA and METI released a second version of the ASTER GDEM (GDEM 2) in mid-October, 2011.GDEM 2 is a 1 arcsecond elevation grid divided and distributed as 1\({}^{\circ}\)x 1\({}^{\circ}\) tiles, has an overall accuracy of around 17-m at the95% confidence level, and a horizontal resolution on the order of 75-m. The GDEM over the study area which has been resized to 3 meters in one pixel is shown in Figure 3(b).
### LiDAR data
Light Detection and Ranging(LiDAR) Digital Elevation Model(DEM) data over the caving face is mosaicked by 13 filtered images of point cloud monitored by Lecia Scnstation C10.The vertical accuracy is about 0.2m and the horizontal accuracy is 0.5m. According to the resolution of TerrASAR and Radarsat-2, the LiDAR DEM is about 3 meters by 3 meters each pixel. The LiDAR DEM is shown in Figure 3(c).
### SAR data
Two imagines of TerraSAR of X-Band acquired On April 18, 2015 and April 29, 2015 with 11 days interval and two imagines of Radarat-2 of C-Band acquired On April 4, 2015 and April 28, 2015 with 24 days interval are used in this study. The baseline of a couple of TerraSAR imagine which the resolution is 3.3 meters in azimuth and 2.6 meters in range is about 53 meters and Radarsat-2 which the resolution is 2.9 meters in azimuth and 2.6 meters in range is 98 meters.
## 3 Results and Disscution
The couple of Radarsat-2 imagines in24 daytime interval imaged on April 4, 2015 and April 28, 2015 and the couple of TerraSAR images in 11 days time interval imaged on April 18, 2015 and April 29, 2015 are processed with different DEM (such as SRTM, GDEM and LiDAR DEM) by Sarscape module of Environment for Visualizing Images (ENVI)software. The processing is shown in figure 1 and the results shown in Figure 2.
The results generated by the couple of Radarsat-2 imagines in 24 days time interval with different DEM are shown in Figure 2(a),(b),(c).The maximum surface subsidence is about -0.06 meter in Figure 2(a), -0.061 meter in Figure 2(b) and -0.066 meter in Figure 2(c).Figure 2(d),(e),(f) which shown are the results of the couple of TerraSAR images in 11 days time interval with different DEM. In Figure 2(d) the maximum surface subsidence is about -0.043 meter, in Figure 2(e) is about -0.043 meter and -0.044 meter in Figure 2(f).The subsidence basin caused by coal mining is clearly shown in the top right corner of Figure 2, either generated by Radarsat-2 or TerraSAR.
Compared with Figure 2(a),(b)and(c), the error value of InSAR line-of-sight(LOS) surface deformation measurement is less in Figure 2(c).This is because the LiDAR DEM is more accuracy than SRTM and GDEM. In Figure 3, it is clearly that the maximum elevation of GDEM is higher than SRTM and LiDAR DEM. Maybe the quality of GDEM in this study area is the worst compared with SRTM and LiDAR DEM. From the results generated by Radarsat-2 and TerraSAR, it found that the area where uplift are larger in TerraSAR than in Radarsat-2. It is because that the noisy caused by surface vegetation is more sensitive in X-Band than in C-Band.
Figure 1: Processing of D-InSAR in Sarscape module
As mentioned in [PERSON]([PERSON], X., 2012), the accuracy of InSAR line-of-sight(LOS) surface deformation measurement depend on the accuracy of geography. The \(\Delta r\) is defined by
\[\Delta r\ =\ \frac{\B_{\perp}}{\rho\ \mathrm{s\,in\,}\theta}\ \partial H \tag{2}\]
where \(\Delta r\) = the accuracy of InSAR line-of-sight(LOS) surface deformation measurement
\(\B_{\perp}\) = the perpendicular baseline
\(\rho\) = the range from antenna in the satellite to target on the earth surface
\(\theta\) = the incidence angle
Figure 3: Different DEM figures with resolution of 3 meters (a) SRTM (b) GDEM (c) LiDAR DEM
Figure 2: Displacement figures of Radarast-2 and TerraSAR with different DEM
\(\partial H\) = the error value of geography
In this study, the relation between \(\partial H\) and \(\Delta I\) of TerraSAR with baseline about 98 meters and Radarsat-2 with baseline about 53 meters are shown in Figure 4(a), Figure 4(b) respectively.
The error value of InSAR line-of-sight(LOS) surface deformation measurement increase as well as the error value of geography. Take Radarsat-2 for example, when the error value of geography is about 10 meters, the error value of LOS is about 1.5 millimeters with the baseline about 98 meters in this study. In Figure 2, the error value of LOS caused by the error value of geography is more serious for Radarsat-2 than for TerraSAR, the reason is that the baseline of Radarsat-2 is 98 meters and the TerraSAR is only 53 meters and the error value of LOS is more sensitive to long baseline than short baseline.
## Acknowledgements
The authors would like to acknowledge Mr [PERSON], the third surveying and mapping institute, Hebei Bureau of Geoinformation, for providing the laser scan data. This work was supported by\"Geographical Conditions of Service Oriented the Surface Subsidence Monitoring caused by Resources Exploitation Program(No. 201412016)\"from National Administration of Surveying, Mapping and Geoinformation, China.
## References
* [PERSON] (1995) [PERSON], 1995.,Effects of Mining Subsidence Observed by Time-lapse Seismic Reflection Profiling. University of Durham..
* [PERSON] (2009) [PERSON], 2009.G. D. E. M. Validation Team. ASTER Global DEM.
* [PERSON] (1995) [PERSON], 1995. Subsidence studies in Indian coalfields by a semi-empirical approach. Proceedings of the Fifth International Symposium on Land Subsidence, The Hage, pp. 127-133.
* [PERSON] (2013) [PERSON], 2013.Large-scale deformation monitoring in mining area by D-InSAR and 3D laser scanning technology integration. International Journal of Mining Science and Technology, 23(4), 555-561.
* [PERSON] (2011) [PERSON], 2011. Land subsidence monitoring by D-InSAR technique, Mining Science and Technology (China), 21(6), 869-872.
* [PERSON] (2009) [PERSON], 2009. Monitoring residual mining subsidence of Nord/Pas-de-Calaiscool basin from differential and Persistent Scatterer Interferometry (Northern France).J. Appl. Geophys. 69 (1), 24-34.
* [PERSON] (2012) [PERSON], 2012. Earth observation data processing method by InSAR and the comprehensive measurement. Science Press, China.
* [PERSON] (2008) [PERSON], 2008.Hole-filled SRTM for the globe Version 4.available from the CGIAR-CSI SRTM 90m Database ([[http://sfrm.csi.cgi.org](http://sfrm.csi.cgi.org)]([http://sfrm.csi.cgi.org](http://sfrm.csi.cgi.org))).
* [PERSON] (2008) [PERSON], 2008.Application of DINSAR and GIS for underground mine subsidence monitoring. Int. Arch Photogramm Remote Sens. Spot. Inf. Sci. 37, 251-256.
* [PERSON] (2014) [PERSON], 2014.Analysis of the evolution of ground movements in a low densely urban area by means of D-InSAR technique. Engineering Geology, 170, 52-65.
Figure 4: The relation betweenthe accuracy of InSAR line-of-sight(LOS) surface deformation measurement andthe error value of geography for (a) TerraSAR and (b) Radarsat-2.
|
isprs
|
Application of Surface Deformation Monitoring in Mining Area by the Fusion of InSAR and Laser Scan Data
|
J. L. Huang, K. Z. Deng, H. D. Fan, J. K. Yang
|
https://doi.org/10.5194/isprsarchives-xl-7-w4-41-2015
| 2,015
|
CC-BY
|
isprs/0f32a438_cc64_454f_a55b_25fc1a423892.md
|
# Typology of Historical Houses in Muzzaffarid Era: Case Study of Ardakan City, Yazd, Iran
[PERSON]
Corresponding author
###### Abstract
MOZAFFARIDS established the [PERSON] dynasty in Yazd, Iran. This era witnessed a development in architectural and decorative features of Yazd buildings. Ardakan, in particular, enjoyed a period of prosperity in the 14 th century, which led to a flourishing growth of architectural production. The present article uses a descriptive-analytical and historical-comparative method to investigate the typology of 12 historical houses of Ardakan city in the Muzzaffarid era. By using literature review and field studies, four of these houses have been studied in detail in terms of architectural and decorative features and construction methods. The results of the study show that Mozaffarid houses in Ardakan have certain and distinguishable patterns and follow a general rule. Main Iwan as an outstanding feature in Mozaffarid houses, as well as a central courtyard and a Soffich in front of the Iwan, repeated in all houses and other parts, are formed in their surroundings. With the change in the location of the main Iwan in the northern or southern part of the central courtyard and the fact that whether or not there is a garden, significant differences in organization and the quality of spaces have been made. Mozaffarid houses in Ardakan can be described as two main types each of which can be divided into two subcategories based on the Iwan position. The knowledge of typological characteristics of these historical architecture needs to be gathered to preserve the built heritage and a comprehensive document is essential for the preservation and conservation of the houses.
H +
Footnote †: Corresponding author
## 1 Introduction
Historical vernacular housing has always been designed with respect to nature; incorporating and reflecting the local lifestyle and cultural conditions as well as being a direct expression of the state of know-how of construction techniques, the availability of local construction materials and local climatic ([PERSON], 1976). Today, historical settlements and their rather homogeneous housing typologies can still be found and studied in the case of preserved contexts and buildings in Iran. Iran is a rich country in vernacular architecture. Despite the losses due to frequent earthquakes and large-scale planning projects, historical towns still contain thousands of houses. Until recently, there have been little attempts to record Iranian vernacular buildings; even less to analyze or explain their architecture. The houses before the [PERSON] era have been the most unknown ones in comparison to other eras. So, the present study aims to investigate the historical houses in the Muzzaffarid era, which is one of the most significant eras of Yazd history. Ardakan is one of the oldest cities of Iran containing a lot of old houses in its historical context. Its historical houses, having remained largely undocumented, are the most important samples to represent the lifestyle of the past. In Ardakan County, some elegant samples of Muzzaffarid houses have been identified needing detailed investigation. This article aims to investigate the typology of Ardakan historical houses in the Mozaffarid era. In this regard, the spatial organization of 12 historical houses located in the historical context of Ardakan was studied four of which are elaborated in terms of architectural, decorative features, and construction methods. These buildings have features, which are the same in most samples and are unique to the architecture of the era; while there is considerable variation, spatially and physically, from one house to another. Thus, this paper is supposed to classify the historical houses of Ardakan based on the typological method.
### Muzzaffarids and Architectural Legacy in Yazd
The **[PERSON]** (Al-e Mozaffar) is a Sunni family coming to power in central Iran in the fourteenth century and the family of governors of Yazd under the [PERSON] (1256-1335/1353), who expanded their domain after the collapse of the Il-Khanid power and established the [PERSON] dynasty in Yazd, Kerman, Fars, and Erda-e Ajam. The [PERSON], enduring until its destruction by [PERSON] ([PERSON]) in 795/1393, originated as an Arab family settling in Khorasan. They stayed in Khorasan up until the Mongol invasion of that province when they fled to Yazd. Serwing under the [PERSON], they gained prominence when [PERSON] was made the governor of Maybod ([PERSON], 2014; New World Encyclopedia contributors, 2009). [PERSON] says that the Muzzaffarids \"are remembered as cultural patrons\" ([PERSON], 2007). Muzzaffarid era in Yazd has been one of the most important ears in the history of the region as it was for the first time that a dynasty ruled the southern and central parts of Iran for more than a half-century ([PERSON], 1993). In this era, most of the artists and scientists settled in Yazd to avoid Mongol invasion and to pursue their academic attempts. Partial security and peace in Yazd and the attention of Il-Khanids to Muzzaffarids led to the high scientific artistic interaction between these two governments. Moreover, the collection of scientists and artists in Yazd, as well as communication about architectural techniques and decorations, led to the development of architecture in this era. The special features of the architecture and decorations of buildings in Muzzaffarid era led to the creation of a local school of thought, or style, of architecture. It is called Muzzaffarid school of thought or Yazdi style ([PERSON], p.101). The distinctive feature of the Muzzaffarid style was the use of \"large transverse arches\" supporting \"barrel vaults\" such as those added to the mosque at Yazd ([PERSON], 1996). Muzzaffarids made a sizeablenumber of personal and charity buildings especially in Yazd, Meybod, and Ardakan some of which still exist. Although the Muzaffarid rulers did not earn the type of fame that makes their names universally known, the dynasty did give its name to culture and architecture.
## 2 Materials and Methods
### Geographical and Historical area of study
The geographical area of this study is Ardakan County, which is the second major city of Yazd Province located on the north side of the province and in the middle of the central desert of Iran (Figure 1). The proximity of Ardakan to the central desert of Iran has led to the high effect of desert weather on this region; winters are cold with low precipitation and the summers are hot and dry. The per capita of precipitation is 62.9 mm and the average temperature is 20.2 degrees. Lack of water is one of the most serious limitations in the city.
In Muzaffarid era, Ardakan was one of the villages of Meybod city ([PERSON], 1966, p. 160) and the 14 th century was one of the most decisive years for Ardakan due to the ruling of Muzaffarids in Meybod during which construction boomed. Generally; being located in the center of Iran and far from the boundaries, partial and consecutive security throughout the history of the region, benefiting from conservative peaceful rulers, dry and unfavorable weather, and being protected from natural disasters like flood and earthquake have saved the region from subversive happenings and made Ardakan county be protected from some of the rare samples of architecture in this era ([PERSON], [PERSON], 2013, p. 105). In figure 2, the historical urban fabric of Ardakan and the location of the four houses mentioned in the study are shown.
### Methodology
By definition, typification is the action of typifying, i.e., dividing/distinguishing into types. The concept of type refers to the overall or the set of properties common to some individuals or objects recognizing structural similarities between architectural objects ([PERSON] et al., 2013). According to [PERSON] (1999), a type is the organic ensemble of the common characteristics of buildings in a defined culture over a specific period. The methodology used involved quantitative and qualitative analysis of the building typology of Muzaffarid houses in Ardakan. In this study, understanding the location and position of spaces and architectural elements especially unique spaces of the Muzaffarid houses including Iwan, Soffeh, and garden were intended. To do so, by using a descriptive-analytical research method and studying the spatial organization diagrams as well as studying literature review and field studies; the typology of 12 houses is recognized. Four of these houses were thoroughly analyzed according to fundamental spaces, materials, construction techniques, and decorations. Also, the various spatial characteristics are clarified by the use of graph representation, dimensionless plans, and axial diagrams.
## 3 Results and Discussion
### Mozaffarid Houses in Ardakan
Some parts of the Muzaffarid cultural heritage, which are of high importance in architectural features and are identified in some historical neighborhoods of Ardakan city are Muzaffarid houses. The houses in this era, as the oldest remained houses in Iran, reveal the construction pattern formed in the 14 th century, which continued until the [PERSON] dynasty. The general pattern of studied houses has been repeated despite some differences in the location of spaces. All of these houses are built around a small central courvard. There is an Iwan in one of the southern or northern sides of the central courvard and on the opposite side, there is a Soffeh. On the east and west sides, there are 2 small Soffeh and two doorways. One of the doorways connects the entrance corridor to the courvard and the other one is the doorway of a room. In the back part of the main Iwan, there is a Tanabi room or a garden and on each eastern and western side of that, there are rooms in two stories. Since visual protection is critical because of privacy, attention is also given to the patterns of entry and access to and from the central courvard. In table 1, the data regarding all identified Muzaffarid houses in Ardakan have been displayed. This table consists of the information about all samples of the investigation. In these diagrams, the main parts such as semi-open spaces (Soffeh and Iwan), open spaces (the central courvard and garden), and close spaces such as service spaces, living spaces, and adjunct spaces have been shown by using different colors. The vertical axis represents the northeast-southwest direction.
### Architectural features (Spatial and Functional Organization)
In this section, characteristics of site and main spaces as well as seven functional features of plan are analyzed and evaluated as it is shown in Tables 2 and 3. The most important spaces of investigated houses include entrance corridor and Pishgah (Entrance hall), Main Iwan, Soffeh (in front of Iwan), the space behind the Iwan (Tanabi or Soffeh), the courvard and garden, and the western and eastern rooms. These houses usually lack the basement.
Figure 1: Location of Yazd Province in Iran; Location of Ardakan city in Yazd Province.
Figure 2: Location of historical urban fabric in Ardakan city; Distribution of studied Muzaffarid houses in historical urban fabric of Ardakan.
**- The Characteristics of the Site:** In all investigated houses, except for Asari house, the land is rectangular and in the north-south axis with a 10 to 12 degree of deviation to east. This orientation in accordance with Room Raste1 in a way that the longer side of the land is in the north-south axis and the shorter is in the east-west axis. This orientation is predominant in the historical urban fabric of Adrakan. As well as different climatic factors, the orientation has been affected by the rules of farms and gardens irrigation network. As the lands in Ardakan has south to north slope, Qanats2 flow from south to north and the farmlands direction is the same ([PERSON], [PERSON], 2007, p. 165).
Footnote 1: In Iranian architecture, Room refers to the direction of the building. Room Raste stands in the northeast-southwest direction.
Footnote 2: A gently sloping underground channel to transport water from an aquifer or water well to surface for irrigation and drinking.
- **Anfrance Corridor and Pishgah:** All these houses have a Pishgah. In some cases, however, the Pishgah has been destroyed or had a small area. The pishgah is often simple without much decoration. After the Pishgah, there is a corridor making it available to access service areas, stables, and staircase leading to the roof. With one or some 90-degree turns, the corridor connects the Pishgah to the central courtyard. The common point about all these houses is that one can enter the courtyard only from the eastern or western side of that.
- **Main Ivan:** Muzaffarid Ivan is the oldest Iwan in Iranian traditional houses remaining firm and stable until now ([PERSON], [PERSON], 2013, p. 203). This Iwan, which is known as the most important and outstanding architectural element in Muzaffarid houses is taller than Iwans of the upcoming eras and it forms a long vertical rectangle. The height of the Iwan is between 7 to 9 meters in the investigated houses, which is 2.2 to 2.8 larger than the span.
\begin{table}
\begin{tabular}{c c c c c c c} & House name & Amin House & Shorkaai House & Pourrahimi House & Aboutable House \\ & Entrance &
The width of the Iwan is the same as the central courvard and the depth is almost the same as the length of later. The Iwan occupies almost the same area as the courvard. The construction of Badgir (wind catcher) has not been common in the Muzzaffarid era and these houses do not benefit from Badgir. The long narrow Iwan on the top of the small courvard acts as natural ventilation and transfers the wind into the courvard. However, in some houses, there are Badgir being built in the next eras. Iwan also acts as the divisonary space to access some of the areas.
- **Soffeh in front of Iwan and the Room behind it:** On the opposite side of the main Iwan, there is a small Soffeh accessing to which is possible through the door located on the Espara8 of the Soffeh. Behind this Soffeh, there is a long room perpendicular to the courvard. The height of this side of the house is at the same level as the western and eastern rooms.
- **The space behind the main Iwan:** In the investigated houses, the space behind the main Iwan is often a Tanabi or a Soffeh overlooking a garden on the north or south of the land. In some cases, there is no space behind the main Iwan. The entrance doorway to this area is located on the Espar of the Iwan.
- **The rooms adjacent to the main Iwan:** In the east and the west side of the main Iwan, there are two rooms, the accessibility to which is possible through the doorways located symmetrically on two side walls of the Iwan. These rooms are connected to the courvard through Iwan and space lighting is provided by the doorway openings. In houses where behind the main Iwan is a Soffeh overlooking the garden, lighting of the rooms adjacent to the Iwan is provided by the rooms adjacent to this Soffeh. On top of these rooms, there are 2 more rooms on the first floor, which are at the same height as the Iwan. These rooms have a structural role and act as flying buttress. They are mainly used as the food or goods depot and can only be accessed through the stars in the entrance corridor. The west and the east spaces of Iwan, which are symmetrical are the only two-story part of the house.
- **Yard:** All of the studied houses are arranged around a central private courvard where family activities occurred. In the whole area of the house, a small area is allocated to the central courvard, which is almost 3 to 6 percent ([PERSON], 2007, p. 170). The central courvard acts as the heart of the traditional dwelling and connects all spaces including closed, open, and semi-open to each other. None of the houses has a pond or flowed in the central courtyard. The whole spaces of the building can be accessed by a 20-centimeter stair upper the central courvard. The main concept behind designing the central courvard house was to generate an inward-looking plan with plain external walls, which were designed to discourage strangers from looking inside the house as well as to protect the house from the harsh climate of the region ([PERSON] et al., 2006; [PERSON], 2006).
In some houses of this era, in addition to the central courvard, there is a garden behind the main Iwan playing an important role in ventilating the house. In all these houses, the garden has palm trees with non-original ponds. This yard is located along the Iwan, the courvard and the Soffeh and has emphasized the north-south axis of houses in Ardakan.
### Construction Method (Material and technique)
This section presents a review of the construction systems and materials (Table 4). The materials used in Ardakan's Muzzfarid houses are totally in harmony with the hot and arid climate. All windows and doors have been built of wood. The main building material of all houses is adobe and they are constructed based on load-bearing walls. In addition to the load-bearing role of thick walls, the width of the walls acts as a thermal mass, getting solar energy during the day and giving it back during the night to create a balance in temperature. In the architecture of Muzzfarid houses in Ardakan, vault structural systems have been common. Arched ceilings, in a variety of shapes (vaults, arches, Tavizeth?), have often been used, which were rarely decorated with the patterns. Also, Karbandy10 was used in transferring the area of a dome in one house (Aboutable House). The flat roof has been used sometimes in some parts such as in a barn (the room adjacent to the Iwan on the first floor, [PERSON] and [PERSON] house).
Footnote 10: The bearing strips of the arched ceilings to transfer the compressive loads to the side walls.
Footnote 10: Or Ribed Vault consisting of arches with geometric rules under the original cover intersect.
### Decorative Features
In the Muzzfarid era, housing of which were highly decorated was flourished by the well-known citizens. Muzzfarid houses in Yazd have been decorated in different way, however, mud decorations have been of high importance in the houses. Some examples are mud wall sculptures and mud muugarnas12 and shamseh13. Mud decorations have special delicacy and elegance, so they are just used in rare cases. The abundance of these decorations has not been the same in Yazd
\begin{table}
\begin{tabular}{c c} \hline \hline \multicolumn{1}{c}{[[https://doi.org/10.5194/sports-archives-XLV/M-1-2020-945-2020](https://doi.org/10.5194/sports-archives-XLV/M-1-2020-945-2020)]([https://doi.org/10.5194/sports-archives-XLV/M-1-2020-945-2020](https://doi.org/10.5194/sports-archives-XLV/M-1-2020-945-2020))} \& Authors 2020. CC BY 4.0 License. 950 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Construction Method of 4 case studies of Muzzfarid Houses. Reference: Author.
province and no evidence of the mud decorations has been found in Muzaffarid houses of Ardakan County. Different methods have been utilized to decorate Muzaffarid houses in Ardakan, Ardakan, which can be observed in most of the recognized houses (Table 5). A group of these methods is simple and basic including decorative strip frames under the arch's springing line, the use of Kalli14 arches, and decorative Taghaman15 for vaults. The simplicity of implementation, which was possible with low cost and accessible tools, can be mentioned as the reason for prevalence of these methods. The other group is not as common as the first, although it is seen in some cases. Among the investigated cases, using gypsum decorations (such as lattice windows and decorative frames), wooden lattice windows, and karbandi can be placed in this category.
Footnote 14: A low-height Iranian arch, a combination of Muzchdar and Tiezhdar arch.
Footnote 15: False arch, having the appearance of an arch though not of arch construction.
### Building type classification
According to the analysis of the Muzaffarid houses in Ardakan, it can be seen that the architecture and organization of spaces in these houses follow a general pattern. All of these houses consist of spaces including Pishgah and entrance corridor, a main Iwan, Soffeh in front of the Iwan, rooms on the eastern and western part, and a central courtyard. Also, historical architectural evidences show that the triple combination of the main Iwan, the central courtyard and the Soffeh in front of the Iwan with the north-south axis is repeated in all Muzaffarid houses of Ardakan. Other spaces of the house have been formed surrounding them. The common architectural pattern of these houses is introvert due to the central courtyard. Some houses have a garden too, which is on the back of the main Iwan; thus, they have a semi introvert-semi extwrovert pattern. Other general characteristics of this houses can be extended to the elongation of the building in Room Raste direction, a mass of building in four geographical directions, the two-story part on the west and the east sides of the Iwan, a courtyard level close to the public passage level, multiple 90 degrees turns in the entrance corridor for security and privacy, a rectangular courtyard and rooms, the use of three types of open, semi-open and closed spaces, the installation of windows and openings facing the central courtyard, not using of pond and plants in the courtyard and also the use of Tiezhdar16, and in some cases, Mazehdar17 arches. In addition, the use of local materials and conformity of architecture and structure with climate are also significant in the houses of this period.
Footnote 16: An Iranian Pattern, Special technique in the material arrangement.
As the typology defines the most fundamental differences, the spatial types of the studied houses can be distinguished based on the location of the Iwan in the northern or southern side of the central courtyard and presence or absence of a garden behind the Iwan. Therefore, Muzaffarids houses in Ardakan can be classified into two main types each of which can be divided into two subcategories according to the location of the Iwan (Table 6).
**The first type** has both a central courtyard and a garden as well as a Soffeh behind the Iwan which faces the garden. This type benefits a Soffeh with natural ventilation due to the extensive vegetation in the garden, rooms in the western and eastern parts of the Soffeh overlooking the garden with natural lighting and thus better spatial quality in the adjacent garden spaces. This type involves two subcategories: in the first subcategory, the Iwan is in the south of the central courtyard, the garden in the south of the land, and the Soffeh - which is facing the garden- is in the south of the Iwan and the north of the garden. In the second subcategory, Iwan is located in the north of the central courtyard, the garden is in the north of the land and the Soffeh is in the north of the Iwan and the south of the garden. In the **Second Type**, there is no garden and a Tanabi room or an ordinary room is often located behind the Iwan. However, sometimes there is no room. This type has a very compact plan with minimal natural lighting and natural ventilation through the small central courtyard.
Table 6 summarizes the research findings. The first and second rows indicate the position of the main Iwan and the garden as fundamental differences. The other rows are spaces present in all samples, but due to different positioning, they have caused changes in the spatial organization of the plans.
## 4 Conclusion
In this study, it is shown that the Muzaffarid houses in Ardakan have distinct and identifiable patterns that distinguish them from those of other historical eras. Main Iwan is the fundamental space of these houses that is prominent in all patterns. According to the location of the Iwan and the garden, Muzaffarid houses in Ardakan are classified into two types. Due to the existence of the garden, the first type has a larger area and a better spatial quality as well as more decorations and sometimes more varied construction techniques. These houses probably belonged to well-off families with better financial status and higher social-economic backgrounds in the Muzaffarid era. The second type has less area and less spatial
\begin{table}
\begin{tabular}{c c} \hline & Decorative element & Abundance \\ \hline & & [PERSON], \\ & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & [PERSON] \\ & & Houses \\ \hline & & [PERSON] \\ & & House \\ \hline & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & [PERSON] \\ & & House \\ \hline & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & [PERSON] \\ & & House \\ \hline & & [PERSON], \\ & & [PERSON] \\ & & Houses \\ \hline & & \\ \hline & & \\ \hline & & \\ \end{tabular}
\end{table}
Table 5: Decorative Features of 4 case studies of Muzaffarid Houses. Reference: Author.
quality and diversity than the first type. This type consists of fewer open and semi-open spaces, so it uses less natural ventilation and sunlight. More limited decorations and construction techniques are also observed. The analysis of the Muzzaffarid houses within the city of Ardakan conveys that not only the climatic factors but also cultural-social values have defined the housing typology or the spatial organization of the studied house. Thus, the housing evolution represents a collective development reflecting both the cultural needs as well as the various environmental constraints. These traditional houses represent a spontaneous model that refers to a humble experience of local skills and the limitation of the available local construction materials. Nevertheless, it is widely acknowledged as a distinctive example of a housing development that perfectly confronts the harsh desert climate and responds adequately to the basic needs of its users.
## References
* [PERSON] (1938) [PERSON], 1938. Tarik-e _Yazd yd Jaakada-ye Yazdlan_, Yazd, Iran.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], 2006. _Courtyard housing: past, present and future_. Taylor & Francis Group, New York.
* [PERSON] et al. (1968) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1968. _The history of Iran_. Cambridge, UK: Cambridge University Press. ISBN 9789004127562.
* [PERSON] (1966) [PERSON], 1966. The New History of Yazd. By the efforts of [PERSON]. Tehran: Iran Culture.
* [PERSON] (2007) [PERSON], 2007. _Power, politics and religion in Timurid Iran_. Cambridge, UK: Cambridge University Press. ISBN 9780521865470.
* [PERSON] (1993) [PERSON], 1993. _Yazd from the rise to the fall of Muzzaffaris_. Master thesis. History field. Faculty of Literature and Humanities, University of Tehran.
* New World Encyclopedia contributors. (2009). \"Muzzaffarids,\" New Encyclopedia. (www.newworldencyolopedia.org/index.php?ti=Me-Muzzaffarids&oldidid=915345 (accessed January 24, 2020).
* [PERSON] (1996) [PERSON], 1996. _Dictionary of Islamic architecture_. London, UK: Routledge. ISBN 9780415060844.
* [PERSON] (1999) [PERSON], 1999. _Historical Processes of the Building Landscape, Architectural Knowledge and Cultural Diversity_, ed. [PERSON], Comportments, Lausanne, Switzerland; 39-50.
* [PERSON] (2006) [PERSON], 2006. A typological perspective: the impact of cultural paradigmatic shifts on the evolution of courtyard houses in Cairo. _METUM J Fac Archit_, 23(1):41-58.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], 2013. Building typologies identification to support risk mitigation at the urban scale Case study of the old city centre of Seixal, Portugal. _Journal of Cultural Heritage_, 14: 449-463.
* [PERSON] (1976) [PERSON] (1976). Housing by people. [PERSON], London.
* [PERSON] (1955) [PERSON]. (1955). The Architecture of Islamic Iran: The Il Khindi Period, Princeton.
* [PERSON] (2014) [PERSON], \"MOZAFARIDS\", Encyclopedia Iranica, online edition, 2014, available at [[http://www.niancaonline.org/articles/](http://www.niancaonline.org/articles/)]([http://www.niancaonline.org/articles/](http://www.niancaonline.org/articles/)) mozaffarids (accessed on January 24, 2020).
* [PERSON] et al. (2007) [PERSON], [PERSON], 2007. Muzzaffarid houses of Meybod. From the book of \"A city there is in Meybod\". By [PERSON]. Tehran: Cultural Heritage, Handicrafts and Tourism Organization of Iran, Meybod Cultural Heritage Research Institute.
* [PERSON] et al. (2013) [PERSON], [PERSON], 2013. Investigating the evolution of Iwan in traditional houses of Yazd-Ardakan plain from Muzzaffarid to Qajar era. _Soffeh Journal_. No. 62.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Iwan/Location} & \multicolumn{1}{c|}{South of Central} & \multicolumn{1}{c|}{South of Central} & \multicolumn{1}{c|}{South of Central} \\ & \multicolumn{1}{c|}{Courtyard} & \multicolumn{1}{c|}{Courtyard} & \multicolumn{1}{c|}{Courtyard} & \multicolumn{1}{c|}{Courtyard} \\ \hline Garden Yard Location & South of House & North of House & \multicolumn{1}{c|}{ } & \multicolumn{1}{c|}{ } \\ Entrance Area / Location & North of House & South of House & North of House & \multicolumn{1}{c|}{ } & \multicolumn{1}{c|}{ } \\ \cline{2-5} \cline{5-5}
|
isprs
|
TYPOLOGY OF HISTORICAL HOUSES IN MUZAFFARID ERA: CASE STUDY OF ARDAKAN CITY, YAZD, IRAN
|
M. Dormohamadi
|
https://doi.org/10.5194/isprs-archives-xliv-m-1-2020-945-2020
| 2,020
|
CC-BY
|
isprs/c65e44b6_9e26_4cd0_b740_28267121d729.md
|
Exposing and Providing Access to Indian Bioresource Information Network (IBIN) Species Occurrence Dataset as Web Service using OGC WPS Standard
[PERSON], [PERSON], [PERSON]
Indian Institute of Remote Sensing, ISRO, Dehradun, India - (kapil, sameer)@iirs.gov.in2 Birla Institute of Technology and Science, Pilani, India - [EMAIL_ADDRESS]
###### Abstract
Species occurrence data are collected by many researchers worldwide as record of species present at a specific time at some defined place as part of biological field investigation serving as primary or secondary dataset. These datasets reside in separate silos across numerous distributed systems having different formats limiting its usage to full potential. IBIN portal provides a single window for accessing myriad spatial/non-spatial data on bioseources of the country. To promote reuse of occurrence dataset among organizations in an interoperable format including support for integration across various platforms & programming languages, it is been exposed as web service using OGC Web Processing Service (WPS) standard. WPS provides standardized interface for performing online geo-processing by exposing spatial processes, algorithms and calculations thereby enabling machine to machine communication and wider usage in various scenarios (e.g. service chaining etc.). Open source ZOO-project is used for developing the 'Species Search' WPS service. WPS takes inputs as either the species name or bounding box or shapefile defining the area of interest and returns queryable OGC complaint Web Map Service (WMS) as output with specie(s) occurrences represented in grid (Skm x 5 km) format, with each grid possessing attributes like specie(s) name, family, state, medicinal detail etc. WPS process can be invoked asynchronously, enabling proper feedback regarding status of the job submitted. JavaScript based web client for consuming this service has also been developed along with custom QGIS plugin to allow potential users to access the same in GIS software for wider reusibility.
2018 IEEE International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XIII-S5, 2018 ISPRS TC V Mid-term Symposium 'Geospatial Technology - Pusel to People', 20-23 November 2018, Dehradun, India
## 1 Introduction
Species occurrence data has been collected for a long time only as physical specimens and stored in museums as natural history. Such data is collected by many researchers as it finds many applications in various fields like biogeographical studies, conservation planning, bioprospecting ([PERSON], 2005), species distribution prediction ([PERSON] et al., 2006), estimating magnitudes of animal movements ([PERSON] et al., 2018) etc. In recent times, however, museums and other agencies have spent considerable amounts to support the digitization of such data into online species occurrence databases ([PERSON] et al., 2017).
These databases are managed by different bodies, meaning that they reside in various distributed networks, and each such database has a different format for storage and retrieval of data. Further, the data collected is usually documented and organised in a extremely inconsistent and fragmented approach ([PERSON] et al., 2013). This creates a problem as separate procedures are required to gather the same data from different databases, thereby limiting the use of datasets from multiple databases to its full potential. The Indian Bioresource Information Network (IBIN) serves as a portal which networks the otherwise independent databases into a unified delivery system ([[http://ibin.gov.in/index.php/?option=com.jbin&task=](http://ibin.gov.in/index.php/?option=com.jbin&task=)]([http://ibin.gov.in/index.php/?option=com.jbin&task=](http://ibin.gov.in/index.php/?option=com.jbin&task=)) about). IBIN portal provides a single window for accessing myriad spatial/non-spatial data on bioresources of the country. This setup allows the data to be made available to a range of end users at a single end-point thereby ensuring that the data is always available in a consistent format thereby making it simple to consume.
To promote the reuse of IBIN species occurrence dataset among organizations in an interoperable format including support for integration across various platforms & programming languages, it is been exposed as web service using OGC Web Processing Service (WPS) standard. The OGC WPS provides a standardized interface for performing either simple or complex geoprocessing operation/computation online via web service from the remote host ([PERSON], 2015; [PERSON], 2007). As a result, reusability of the data in an interoperable manner is achieved, which is also platform-independent and can be consumed by multiple programming languages. This also provides the power to chain simple processes to allow for the execution of various complex processes in a variety of different contexts.
'Species Occurrence Search' WPS service takes inputs as either the species name or bounding box or shapefile defining the area of interest and returns queryable OGC complaint Web Map Service (WMS) as output with specie(s) occurrences represented in grid (Skm x 5 km) format, with each grid possessing attributes like specie(s) name, family, state, medicinal detail etc. In the following section, the reader will find the overall setup of this WPS architecture, its important features, the design and implementation of the 'Species Occurrence Search WPS' and of the JavaScript based web client and QGIS plugin which consume this WPS and use the WMS output to display results.
## 2 Web Processing Service
A Web Processing Service (WPS) is a standardized interface defined by the Open Geospatial Consortium (OGC). It is a web service which makes it possible to execute computing processes and retrieve metadata which describe their purpose and functionality. The capabilities of a WPS can be retrieved using a GetCapabilities request, details of a specific service can be obtained using a DescribeProcess request while the processes can be executed using an Execute request ([PERSON], 2015; [PERSON], 2007). Since the release of version 2.0.0, job control and monitoring operations like GetStatus, GetResult and Dismiss have also been added which are particularly useful during an asynchronous execution.
### Overall Architecture
The WPS standard forms the heart of this project. The open source ZOO-project was used to develop the 'Species Occurrence Search' WPS service.
Figure 1 shows the overall architecture of the Species Search WPS. It allows searching for species occurrence using three ways either using species name or bounding box input or using a shapefile denoting the area of interest. WPS service is not just one service, this is actually a simplified representation for three services, one of which will be chained with WPS based on the inputs. WPS service processes the inputs using which it queries the IBIN database. Once it receives the response form the database, it converts the received response into a format that is accepted by MapServer ([[https://mapserver.org/index.html](https://mapserver.org/index.html)]([https://mapserver.org/index.html](https://mapserver.org/index.html))).
ZOO-project enables to write the WPS processes in languages like Python, PHP, Java, C# and JavaScript. Here the species search WPS service is written in Python using various Python geospatial libraries like GDAL, OWSLib etc. ZOO-project also provides a capability to integrate MapServer support ([[http://www.zoo-project.org/](http://www.zoo-project.org/)]([http://www.zoo-project.org/](http://www.zoo-project.org/))). This happens in such a way that once a WPS using MapServer support is terminated, its outputs are passed to MapServer. Once MapServer gives the WMS output, that output is received by WPS species search service which makes some necessary changes to the WMS so as to enable handling GetFeaturInfo requests. This enables user to get the features associated with each grid (species search result) of the output fetched from IBIN species occurrence database. Finally, the WMS output is returned to the client which can then be used by the client to visualize the output.
### Important Features
#### 2.2.1 Interoperability
A WPS allows the processes and code to be delivered to organizations irrespective of underlying program ([PERSON] et al., 2011). This ensures that the functionality can be used by organizations in a platform-independent manner while also giving the managing body to make necessary changes and updates to the code without breaking the functionality for any of the organizations.
#### 2.2.2 Reusability
Services exposed as a WPS can be reused by organizations in multiple applications ([PERSON] et al., 2011). This means that the same functionality can be incorporated into multiple applications without having to explicitly design that functionally for each application separately, simply by importing the WPS into the application.
#### 2.2.3 Service Chaining
This is a workflow of services where for each pair of services, the second service can occur only after the first one is terminated ([PERSON] et al., 2009). This allows the creation of repeatable workflows and chunking of complex tasks into simpler blocks, each handled by a different service. Existing geospatial services like WMS or another WPS can also be incorporated into such a service chain ([PERSON] et al., 2009).
#### 2.2.4 Asynchronous execution
Processing of geospatial data often takes a long time. This can often exceed the maximum connection timeout range of Hyper Text Transfer Protocol (HTTP) servers, which is what WPS relies on ([PERSON], 2008). Therefore, it is desirable to have an asynchronous execution of the same, as it decouples the request from the response and consequently avoids wasting and draining client resources till the processing goes on at the server end. ([PERSON] and [PERSON], 2015).
Figure 1: The overall architecture of the Species Search WPS
Figure 2: Asynchronous execution sequence diagram of the WPS processrequest to fetch the result of execution from the WPS server. Figure 2 shows an asynchronous execution sequence as applicable for the Species Search WPS. The client would make an Execute request, passing the required input as either the name of a species or a bounding box or a shapefile defining the area of interest, and would be notified of a JobID for the process which has thus been initiated. The client would then constantly ping the server with GetStatus requests passing the JobID and would be notified of the status of execution as well as the percentage of completion of execution when the process is running. Finally, once the client is notified that the execution is completed, the client would retrieve the results by making a GetResult request passing the JobID, for which the WPS server would return the WMS output containing the results showing the location of species occurrence with attributes as per the input data provided.
### Choice of WPS Framework
ZOO-project was selected as the framework for the Species Search WPS. A major driving factor was the support of multiple languages in Zoo, which is not provided by other frameworks like PyWPS ([[http://pywps.org/](http://pywps.org/)]([http://pywps.org/](http://pywps.org/))) which supports Python, 52\({}^{\circ}\) North ([http://52 month.org/communities/geoprocessing/wps/](http://52 month.org/communities/geoprocessing/wps/)) which supports only Java, or GeoServer WPS ([[http://docs.goeserver.org/stable/en/user/services/wps/index.htm](http://docs.goeserver.org/stable/en/user/services/wps/index.htm)]([http://docs.goeserver.org/stable/en/user/services/wps/index.htm](http://docs.goeserver.org/stable/en/user/services/wps/index.htm)) 1) which again only supports Java. This gives flexibility to the maintaining organization to develop and publish other services in different languages as preferred by the developers. The performance of ZOO-Project is acceptable considering the tested response times, failure rates and throughput with concurrent requests. Further, among PyWPS and ZOO, frameworks which support Python, the performance of ZOO is reported to be better in all three metrics and is known to have a better support community [15].
## 3 DESIGN and Implementation
### Adding Capabilities for Handling Different Inputs
The service has been designed to take either the name of the species, a bounding box or a shapefile describing the area of interest as an input. This means that the WPS should be able to handle all these three types of inputs and process them accordingly.
To provide this functionality, the service is made to accept two inputs instead. The first input parameter, called Service_Name, asks the user for the choice of service to be executed. This defines the type of input that the user will be providing to the service. The second parameter, called Input_Data, is the parameter which accepts the name of the species or the coordinates of a bounding box or the URL of a shapefile as an input. Figure 3 shows the complete structure of WPS chaining occurring in the Species Search WPS. The Species Occurrence Search WPS validates the inputs, if the inputs are invalid, an error message is returned, otherwise one of the services - Search by species name, search by bounding box or search by shapefile, is chained with the existing WPS by passing to that service the required inputs from Input_Data. Here 'Search by species name', 'Search by Bounding Box' and 'Search by Shapefile' are the three services that form WPS as described in the overall architecture.
### Querying the IBIN Database according to the Input
This part of the design uses selective chaining of processes. Based on the type of inputs passed, a separate process is executed which processes the inputs as required, makes the appropriate request to IBIN database, and receives the response from the database. This response contains data of all the locations where some species are found if the input was the name of a species, or all the species which were found within a bounding box or an area of interest and their locations. Further, available information about each species is also part of the response, like family, medicinal value etc. This response is in raw format which must be processed so that it can be returned to the client is a useful format.
### Generating WMS Output
For a client, it would be useful if the data was returned in a format that represented all the data graphically instead of raw data which would require processing by the client for finding useful details from the output. This is where generating a WMS output comes in. A WMS output would return to the client a raster layer which can be overlaid onto a map. The Species Search WPS returns the location of species as 5 km x 5 km grids. The data associated with each grid can be accessed by passing the coordinates of some point in the grid as parameters in a GetFeatureInfo request to the WMS server. This removes all spatial processing load from the client, except displaying the WMS layer, and gives the data in a graphical format.
To make the WMS output, ZOO-project provides support for integrating MapServer with the WPS. This integration makes it possible to pass some data to MapServer for the generation of a WMS output. This output is not directly capable of handling GetFeatureInfo requests. To add this capability, the WPS was configured to make the necessary changes before publishing the output to the client. This ensures that the client can always fetch all the associated data at any point by making the corresponding GetFeatureInfo request to the WMS server, which the client can identify using the URL of the WMS output received by the client.
### Developing Clients for Consuming the WPS
While the WPS provides the server-side functionalities which could be incorporated into multiple applications to suit the needs of organizations, it was imperative that some clients that can consume the WPS were developed. This was necessary so that and users who wish to gather the results could use the available clients, instead of manually making the request to the WPS and handling the WMS outputs. Consequently, a web client and a custom QGIS plugin were developed.
Figure 3: Species Occurrence Search WPS Service
#### 3.4.1 Web Client
Zoo-project provides boilerplate JavaScript code which is capable of handling WPS operations, both synchronous and asynchronous. The web client is built using this boilerplate code as a base (figure 4). It has been developed to make all requests asynchronously, and the user is notified of the progress via a progress bar which is updated with the response of each GetStatus request that is made. Leaflet ([[https://leaflejs.com/](https://leaflejs.com/)]([https://leaflejs.com/](https://leaflejs.com/))), an open-source JavaScript library for interactive maps has been used to render maps and the WMS output (showing the location of species occurence in grid format). Further client has functionality to make GetFeatureInfo requests whenever someone clicked on the WMS output layer. The results (showing attribute of species found) are then shown as a popup as shown in figure 5.
#### 3.4.2 Custom QGIS Plugin
The QGIS plugin for QGIS 2.18 was also developed as software like QGIS are commonly used for interpreting spatial data (figure 6).
The plugin uses OWSLib ([[https://github.com/geopython/OWSLib](https://github.com/geopython/OWSLib)]([https://github.com/geopython/OWSLib](https://github.com/geopython/OWSLib))) at its core, to make it capable of handling asynchronous execution. Once a request is made, the user is notified of a running process by a progress bar being displayed as a message. Upon successful completion of WPS process, the WMS layer denoting the output is added to the workspace as a layer, whose name the user is prompted to supply before the layer is added. The details of species associated with each grid can be seen by using the 'Identify Features' tool (figure 7).
## 4 Conclusion
The WPS for species occurrence search provides a way to access information of occurrence data of all the species of the country through one unified place. This provides data in a consistent manner to all users, thus eliminating issues of different format of outputs from different databases. Further, the data is supplied in a reusable and interoperable way. This ensures that the service can cater to the needs of the maximum number of users by surpassing any restrictions that may be imposed by platform, therefore, supporting extensively in further studies relating to species occurrence.
## References
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Use of Online Species Occurrence Databases in Published Research since 2010, in: _Proceedings of TDIWG_. p. e20518. [[https://doi.org/10.3897/dwgproceedings.1.20518](https://doi.org/10.3897/dwgproceedings.1.20518)]([https://doi.org/10.3897/dwgproceedings.1.20518](https://doi.org/10.3897/dwgproceedings.1.20518))
* [PERSON] (2015) [PERSON], 2015. OGC WPS 2.0.2 Interface Standard: Corrigendum 2. _Open Geospatial Consortium_. [[https://doi.org/http://www.opengeospatial.org/](https://doi.org/http://www.opengeospatial.org/)]([https://doi.org/http://www.opengeospatial.org/](https://doi.org/http://www.opengeospatial.org/))
* [PERSON] (2008) [PERSON], 2008. Oge Web Processing Service and It's Usage. _GIS Ostraux_ 2008 27, 1-12. [[https://doi.org/10.1007/springerreference.62558](https://doi.org/10.1007/springerreference.62558)]([https://doi.org/10.1007/springerreference.62558](https://doi.org/10.1007/springerreference.62558))
* [PERSON] (2005) [PERSON], 2005. Uses of primary species-occurrence data, version 1.0 _Report for the Global Biodiversity Information Facility_
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. eHealth, a multi-purpose Web Processing Service for ecological modeling. _Environmental Modelling and Software_[[https://doi.org/10.1016/j.envsoft.2012.11.005](https://doi.org/10.1016/j.envsoft.2012.11.005)]([https://doi.org/10.1016/j.envsoft.2012.11.005](https://doi.org/10.1016/j.envsoft.2012.11.005))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]. [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON],
Figure 4: WPS Web Client- Species Occurrence Search by Bounding Box
Figure 5: WPS Web Client- Output of Species Occurrence Search using user defined Bounding Box
Figure 6: QGIS Plugin for IBIN Species Occurrence WPS Service
Figure 7: Executing IBIN Species Search WPS Service using Custom QGIS PluginK., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. Novel methods improve prediction of species' distributions from occurrence data. _Ecography_ (Cop.). [[https://doi.org/10.1111/j.2006.0906-7590.04596.x](https://doi.org/10.1111/j.2006.0906-7590.04596.x)]([https://doi.org/10.1111/j.2006.0906-7590.04596.x](https://doi.org/10.1111/j.2006.0906-7590.04596.x))
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], 2009. Geospatial Services Chaining with Web Processing Service, in: _International Symposium on Intelligent Information Systems and Applications (IISA'09)_.
* [PERSON] and [PERSON] (2015) [PERSON], [PERSON], [PERSON], 2015. Evaluation of Web Processing Service Frameworks. _OSGeo J._ 14, 29-42.
* [PERSON] (2007) [PERSON], 2007. OpenGIS @ Web Processing Service. _Open Geospatial Consortium_. [[https://doi.org/citeulike-article-id:8653309](https://doi.org/citeulike-article-id:8653309)]([https://doi.org/citeulike-article-id:8653309](https://doi.org/citeulike-article-id:8653309))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Species occurrence data reflect the magnitude of animal movements better than the proximity of animal space use: _Ecosphere_. [[https://doi.org/10.1002/ecs2.2112](https://doi.org/10.1002/ecs2.2112)]([https://doi.org/10.1002/ecs2.2112](https://doi.org/10.1002/ecs2.2112))
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Data processing using Web Processing Service orchestration within a Spatial Data Infrastructure, in: _Proceedings of the 34 th International Symposium on Remote Sensing of Environment_.
* [PERSON] and [PERSON] (2015) [PERSON], [PERSON], 2015. Asynchronous Geospatial Processing: An Event-Driven Push-Based Architecture for the OGC Web Processing Service. _Transactions in GIS_[[https://doi.org/10.1111/tgs.12104](https://doi.org/10.1111/tgs.12104)]([https://doi.org/10.1111/tgs.12104](https://doi.org/10.1111/tgs.12104))
|
isprs
|
EXPOSING AND PROVIDING ACCESS TO INDIAN BIORESOURCE INFORMATION NETWORK (IBIN) SPECIES OCCURRENCE DATASET AS WEB SERVICE USING OGC WPS STANDARD
|
K. Oberai, M. Jasoria, S. Saran
|
https://doi.org/10.5194/isprs-archives-xlii-5-781-2018
| 2,018
|
CC-BY
|
isprs/400e3111_bc9b_4244_a5e5_6719557f24ce.md
|
Object-Based and Supervised Detection of Potholes and Cracks from the Pavement Images Acquired by UAV
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
[PERSON]
1 School of Computer Science, Shihezi Univeristy, Shihezi, Xinjiang 832002, China - [EMAIL_ADDRESS]
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
utilizes radar pulses to image the subsurface profile to detect subsurface objects, changes in material properties, voids and cracks, which is very convenient and accurate([PERSON] & [PERSON], 2007).
However, some limited abilities and issues occurred in the previous studies. For instance, most of studies just only focused on one kind of distress, such as cracks or potholes, whereas more than one type of damages could exist on the pavement at the same time. The mobile vehicle integrated with PMS also has a potential risk for the traffic safety and is unable to cover the full pavement of different lanes simultaneously. Given these above problems, the pavement images acquired by Unmanned Aerial Vehicle (UAV) were used to implement the study, and four supervised learning algorithms including K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Artificial Neural Network, and Random Forest (RF) were evaluated in terms of the performance on the detection of potholes and cracks from the UAV images of road pavement.
## 2 Data and Methods
### Image Acquisition and Segmentation
The asphalt pavement located in rural area of Shihezi City, Xinjiang/China was selected in the study. According to the field investigation, the majority of the pavement was in poor condition with a variety of severe pavement distresses, such as potholes and cracks. A multispectral camera Micro-Miniature Multiple Camera Array System (MCA), designed by Tetracam Inc. USA, was mounted on a fixed-wing UAV to capture the pavement images. Theoretically, MCA configures six bands spanning from blue to near infrared, i.e. Blue, Green, Red and three near infrared channels ([PERSON] & [PERSON], 2012). However, the images captured by the three infrared channels do not have sufficient exposure, which results in a lower contrast between the non-distressed and distressed pavement. Therefore, only the images in RGB channels were chosen in this study. The UAV flew along the road at 30 meters above the ground level, in which case one pixel corresponded to about 13.54*13.54 mm area in the pavement. In total, 126 pavement images were acquired with 70% of overlap between two sequential images. However, there is no white traffic line in those above pavement images, which is also one of the common objects on the road surface. In order to increase the generalization of this study, a sample UAV pavement image provided by Arisight Company ([[https://demo.airsight.de/uav/index_en.html](https://demo.airsight.de/uav/index_en.html)]([https://demo.airsight.de/uav/index_en.html](https://demo.airsight.de/uav/index_en.html))) was used to extract the white traffic lines. This pavement image also has three RGB channels and was captured by a digital camera with a higher resolution (1 pixel = 5 mm).
Given the high resolution of pavement images, Multiresolution Segmentation (MS) algorithm integrated in eCognition Developer Software 9.0 was used to extract the objects of potholes and cracks from pavement images. MS identifies single image objects of one pixel in size and merges them with their neighbours based on relative homogeneity criteria. This homogeneity criterion is a combination of spectral and shape criteria, which is calculated through a comprehensive scale parameter. Higher values for the scale parameter result in larger image objects, smaller values in smaller ones([PERSON] al., 2003). However, it is difficult to choose one appropriate scale parameter to extract intact potholes and cracks simultaneously. The contrast, one texture feature calculated based on the Gray-level Co-occurrence Matrix (GLCM)([PERSON] et al., 2008), was selected to measure the variations within the distress and non-distressed areas in the study. The formula for calculation of contrast feature is:
\[Contrast=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}P_{i,j}(i-j)^{2} \tag{1}\]
Where, \(i\), \(j\) are the row and column number of GLCM respectively. \(P(i,j)\) is the value in the cell \(i,j\). \(N\) is the number of rows or columns. In order to obtain an intact pothole object, one merge action was conducted based on the contrast values of objects over the initial segmentation resulting from the lower scale parameter. Namely, all image objects, the contrast values of which exceed the given threshold, will be merged into one image object.
### Dataset Preparation and Feature Selection
Sufficient sample data are necessary for training and validating machine-learning algorithms ([PERSON] et al., 2016). Three classes were defined in this study, i.e. pothole, crack and non-distressed pavement that includes damage-free pavement, with white and yellow traffic lines. However, there are limited numbers of potholes and cracks on the pavement we studied. In comparison of two sequential images, it can be observed that the pixel values in the same location has a bias because of the illumination differences caused by the different solar incident angle. Consequently, this will lead to some degree of difference between the segmentation results of the same target derived from different images. Hence, dataset preparation will be implemented based on three rules: (a) 126 pavement images are segmented individually following the procedure mentioned in section 2.1; (b) the same target in two of sequential images are thought to be of two different objects; (c) white traffic line samples were collected from the image provided by Arisight Company. Finally, 1430 samples containing 221 potholes, 678 cracks and 531 non-distressed pavements with 299 damage-free pavements, 122 yellow and 110 white traffic lines respectively were collected.
Feature selection has a great influence on the performance of learning algorithms. Reasonable numbers and types of features are able to increase the accuracy of algorithm while decreasing the computation time([PERSON] & [PERSON], 2009). Generally, three types of image features can be extracted from digital images, i.e. spectral feature, geometry feature and texture feature. In this study, based on the prior knowledge of feature value distribution of every kind of image objects, 18 features containing 6 spectral features, 6 geometry ones and 6 GLCM texture ones were introduced to train and validate the learning algorithms (Table1). Furthermore, considering the different value distribution of each feature, feature normalization was implemented based on the equation (2) below.
\[\text{X}_{Norm}=\frac{X-\text{X}_{\text{min}}}{x_{\text{max}}-\text{X}_{\text{ min}}} \tag{2}\]
Where \(\text{X}_{Norm}\) is the normalized feature vector. \(X_{\text{max}}\), \(X_{\text{min}}\) are the maximum and minimum values of feature X respectively. Consequently, values of all features are in the same range from 0 to 1, which should speed up the convergence efficiency of learning algorithms. In order to verify the capabilities of each type of feature towards the detection of potholes and cracks, six combinations of three types of features were introduced to each classification algorithm, i.e. spectral(C1); geometry(C2); texture(C3) features; spectral and geometry features(C4); geometry and texture features(C5); spectral, geometry and texture features(C6).
### Detectors of Potholes and Cracks
Four supervised classifiers including K-Nearest Neighbours (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN) and Random Forest were selected to detect the potholes and cracks in this study. In order to examine the predictive accuracy of learning algorithms, and to protect against overfitting, the 1430 samples were randomly divided into 5 folds. For each fold, a model is trained using the out-of-fold observations, and the classification accuracy of the model is calculated using in-fold data. Finally, the average classification accuracy over all folds is an indicator of the model performance. Exceptionally, the performance of Random Forest would be validated using the Out-of-Bag (OOB) Error ([PERSON], 2001) instead of the above n-fold validation procedure. All the algorithms are run on one PC configured with Core i7-6700 HQ CPU@ 2.6 GHZ, Nvidia Quadro M1000M GPU and 16 GB RAM. The running time of different models was also recorded as one of important indicator of the algorithm performance.
#### 2.3.1 K-Nearest Neighbours
K-Nearest Neighbours (KNN) is one type of instance-based and lazy learning algorithm, which determines the class of observation that represents the maximum of its neighbours([PERSON] & [PERSON], 2007). The parameter K determines the number of neighbours considered. The distance between the observation and samples could be defined by each of their Euclidean distances, Minkowski distance etc. Generally, the class of observation will be assigned directly based on the class of majority neighbours. However, KNN might bias the outcome when the number of nearest neighbours in one class is less than other relatively distant neighbours that belong to another class. Therefore, distance weighting is always introduced to refine the classification result of KNN. Namely, the nearer neighbours will contribute more to the outcome than the more distant ones. A common weighting scheme consists in giving each of K neighbours a weight of 1/d\({}^{2}\), where d is the distance of observation with respect to its neighbours. Among these parameters, the parameter K has a great impact on the accuracy of KNN. In this study, we present a series of K to verify how many of K would best towards this application. The Minkowski distance and weighting scheme of squared inverse of distance were selected for the experiment.
#### 2.3.2 Support Vector Machine
Support Vector Machine (SVM) is a classification system derived from statistical learning theory. It separates the classes with a decision surface that maximizes the margin between the classes. The surface is often called the optimal hyperplane, and the data points closest to the hyperplane are called support vectors. The support vectors are the critical elements of the training set. SVM is one of non-probabilistic binary classifiers to assign new examples to one category or the other. It means one SVM can only solve the two-class problems. SVM can also perform the multiclass problems by combining several binary SVM classifiers together based on the logic classification procedure of one-vs-one or one-vs-all. One special feature of SVM is the kernel function, which is introduced to deal with non-linear classification problems. The kernel function can map the original examples into a high-dimensional feature space, in which case the non-linear classification problem will become the linear case. There are several types of kernel model with different performance for different applications, such as linear kernel, polynomial kernel, Gaussian kernel etc. In the study, the performance of four types of kernel models on detection of potholes and cracks were evaluated, i.e. linear, quadratic, cubic and Gaussian.
\begin{table}
\begin{tabular}{c c c c c c} \hline \begin{tabular}{c} Categ \\ \(\text{ory}\) \\ \end{tabular} & Name & \
\begin{tabular}{c} Categor \\ y \\ \end{tabular} & Name &
\begin{tabular}{c} Categ \\ \(\text{ory}\) \\ \end{tabular} & Name \\ \hline \multicolumn{5}{c}{Mean} \\ \multicolumn{5}{c}{of} \\ \multicolumn{5}{c}{Red} \\ \multicolumn{5}{c}{Mean} \\ \multicolumn{5}{c}{of} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{Mean} \\ \multicolumn{5}{c}{of} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{Mean} \\ \multicolumn{5}{c}{of} \\ \multicolumn{5}{c}{Geen} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{Geen} \\ \multicolumn{5}{c}{\(\text{al}\)} \\ \multicolumn{5}{c}{STD} \\ \multicolumn{5}{c}{of} \\ \multicolumn{5}{c}{Red} \\ \multicolumn{5}{c}{StD} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{StD} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{STD} \\ \multicolumn{5}{c}{StD} \\ \multicolumn{5}{c}{Green} \\ \multicolumn{5}{c}{StD} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{STD} \\ \multicolumn{5}{c}{Blue} \\ \multicolumn{5}{c}{STD: Standard Deviation} \\ \multicolumn{5}{c}{} \\ \multicolumn{5}{c}{} \\ \end{tabular}
\end{table}
Table 1 Selected Feature Set
#### 2.3.3 Artificial Neural Network
Artificial Neural Network mimic the way human brain solves problems with a large number of neurons([PERSON] & [PERSON], 2010). ANN is composed typically of three kinds of layers, i.e. the input layer, the hidden layer and the output layer. Every layer comprises a certain number of nodes similar to the neurons in the brain. The number of nodes in the input layer is determined by the number of features in the example data, while the number of output classes decides the number of nodes in the output layer. The number of hidden layers and associated nodes could vary for different applications. Moreover, every node corresponds to a kind of activation function which defines the output of that node given a set of inputs. Sigmoid, Softmax, Rectified Linear unit (ReLU) are commonly used in ANN. Which of them should be used depends on the objective of application. Back propagation is one widely used training procedure for ANN to adjust the weights and bias between the nodes. In this study, a three-layer feed-forward network with one input layer, one Sigmoid hidden layer and one Softmax output layer was constructed to classify the potholes and cracks. The network will be trained with the conjugate gradient method to minimize the difference between the output node activation and the output. In order to find out the appropriate number of nodes in the hidden layer for pavement distress detection, a series of numbers from 1 to 10 was evaluated based on the accuracy of classification result.
#### 2.3.4 Random Forest
Random Forest (RF) is one member of ensemble learning algorithms, which combine a certain number of decision tree classifiers together as a forest to predict the class of new examples ([PERSON], 2001). Every tree in the forest is trained with a subset training set, which is resampled from the original training dataset. The resampling is implemented with replacement and follows the bootstrap sampling procedure, i.e. the number of subset examples is the same as the original examples. In addition to the resampling of training examples for every tree, the features used to find the best split at each node of tree are resampled from the original feature set as well. The class of new examples is predicted by every tree in the forest, and is assigned based on a majority vote of them. The number of trees has a significant effect on the computation time of RF. As a result, a series of evaluation for what size of forest will perform best on pavement distress detection was conducted in this study.
## 3 Results and Discussion
Classification accuracy and computational time are selected as the two indicators of the performance of four learning algorithms. Classification accuracy is defined as the ratio of the number of successfully classified and total samples. Figure 1 illustrates the classification accuracy of KNN trained and validated using different settings of K and six groups of features. The accuracy of all models has a slight increase first and then decreases gradually while increasing K. It can be observed that the model trained with the combination (C6) of spectral features, geometric and textural features always performed best with the highest accuracy, while both the individual set of spectral or geometric features always presented almost similar performance with lower accuracy. Moreover, the figure presents that the individual textural feature set contributes more to the accuracy of KNN among the three types of features (Figure 1(a)). Figure 1(b) is the running time variation of different KNN models and indicates that the running time has no significant fluctuation over increasing the K for every feature combination. In general, the more features were used, the more time taken for KNN. The model with combination of C6 cost the most time while it can achieve the highest accuracy. Figure 1(c) shows the relationship between running time and classification accuracy of the best performance of each of six feature combinations. In order to make a compromise between the time and accuracy, K equals 4 and feature combination C5 containing the geometric and textural features were the best choice, which can result in an overall accuracy of 98.81% and 0.65s running time.(Table 2).
Figure 2 indicates the performance of SVM configured with different types of kernel functions and six feature combinations. Figure 2(a) shows that the SVM with linear kernel presented a lower classification accuracy when it is trained and validated only using either spectral features or geometry features individually. Along with introducing texture features or more types of features, the four kinds of SVM models (linear, quadratic, cubic, Gaussian) almost performed similarly on feature combination C3, C4, C5 and C6, and the highest accuracy was acquired by using three types of features together. Figure 2(b) indicates the running time by different SVM models. It can be seen that the SVM with polynomial kernels (Quadratic and Cubic) cost most time on the feature sets of C1, C2, C3. For C4, C5, and C6, all types of SVM models performed similarly on the running
Figure 3 presents the variation of classification accuracy and running time of ANN with respect to different numbers of neurons in the hidden layer. Specifically, when the number of hidden neurons was set to one, it means that only one abstract feature in hidden layer was used to classify the objects, which was not sufficient to distinguish between the pavement and distresses (cracks and potholes). Moreover, it took the most time to train and validate ANN in this case. With increasing the number of hidden neurons, the classification accuracy could benefit a lot from the more abstract features learned by ANN,
Figure 1: (a) The classification accuracy and (b) running time of KNN with respect to different K, and (c) the relationship between running time and accuracy of the best performance of each of six feature combinations
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Class & \multicolumn{3}{c}{Predicted Class} & Accuracy \\ & Crack & Pothole & Non-distressed & \\ \hline & Crack & 670 & 1 & 7 & 98.82\% \\ True Class & Pothole & 0 & 221 & 0 & 100\% \\ & Non-distressed & 5 & 2 & 524 & 98.68\% \\ Reliability & 99.26\% & 98.66\% & 98.68\% & 98.95\% (OA) \\ \hline OA: Overall Accuracy & & & & & \\ \end{tabular}
\end{table}
Table 3: Confusion Matrix of SVML-C6 and the running time decreased generally (Figure 3(b)). Figure 3 (a) shows that the ANN models with more than one type of features (C4, C5, and C6) and two more hidden neurons could always result in a higher accuracy. It also can be observed that when the number of hidden neurons was set over two, the classification accuracy did not change so much. Taking account of the running time as illustrated by Figure 3(c), the ANN with 12 hidden neurons and feature combination C4 was the best model to classify the pavement and distresses with the overall accuracy 98.81% and the corresponding running time was 0.35s. (Table 4).
Figure 4 shows the performance of RF with different number of trees in the forest. Obviously, the accuracy of RF maintained increasing along with the growth of quantity of trees until a flat trend. The feature combinations with one more type of features (C4, C5, C6) performed best and similarly when the number of trees in forest exceeded about eight. Figure 4(b) shows the running time of RF and demonstrates that the RF with feature combination C1 always cost most time compared with other feature combination. Moreover, there is a positive correlation between the trees and running time. As Figure 4(c) shows, the RF with 18 trees in the forest was the best model to detect the pavement and distresses when using the feature combination C4 (Table 4). The calculation time was only 0.09s.
\begin{table}
\begin{tabular}{c c c c c c} \hline & \multicolumn{3}{c}{Predicted Class} & Accuracy \\ & Class & Crack & Pothole & Non-distressed & \\ \hline \multirow{2}{*}{True Class} & Crack & 671 & 1 & 6 & 98.96\% \\ & Pothole & 0 & 220 & 1 & 99.54\% \\ \hline \end{tabular} This contribution has been peer-reviewed.
[[https://doi.org/10.5194/sprs-archives-XLI-4-W4-209-2017](https://doi.org/10.5194/sprs-archives-XLI-4-W4-209-2017)]([https://doi.org/10.5194/sprs-archives-XLI-4-W4-209-2017](https://doi.org/10.5194/sprs-archives-XLI-4-W4-209-2017)) © Authors 2017. CC BY 4.0 License.
\end{table}
Table 4: Confusion Matrix of ANN12-C4
Figure 2: (a) The classification accuracy and (b) running time of SVM over six feature combinations and four types of kernel function, i.e. linear, quadratic, cubic and Gaussian; (c) the relationship between running time and classification accuracy of the best performance of six feature combinations
## 4 Conclusion
Remote sensing technology as a non-destructive method for road surface inspection has been widely used in road departments nowadays. UAV is one flexible platform that can be configured with different kinds of remote sensing sensors to monitor the pavement condition. Compared with the conventional vehicle-based PMS system, the UAV remote sensing system can acquire the full pavement images of different lanes simultaneously and does not have significant impact on the normal traffic. Moreover, benefit from the full coverage of the pavement, different kinds of pavement distresses can be extracted from UAV images at the same time. In this study, a set of digital pavement images acquired by UAV and four popular learning algorithms (KNN, SVM, ANN, RF) were used to identify the road surface damages. It can be concluded that each kind of learning algorithms when given a specific set of parameters and features can achieve a high classification accuracy (over 98%) while using less computational time. Finally, taking account of the classification accuracy and running time together, four best models for each kind of learning algorithms were recommended, which all have the best performance on the detection of pavement potholes and cracks. It includes the KNN with K being 4 and feature combination of geometric and textural features, the SVM with linear kernel and feature combination of spectral, geometric and textural features, the ANN with 12 nodes in hidden layer and feature combination of spectral and geometric features, the RF with 18 trees and feature combination of spectral and geometric features. Among the four best models, the RF could get the best performance with a higher classification accuracy and minimum running time. In the future, more pavement images acquired by UAV should be used to further evaluate the performance of these best models on the detection of potholes and cracks. Other kinds of remote sensing data including LiDAR and Radar by UAV also have a great potential ability in the pavement condition monitoring. Additionally, other advanced learning algorithms could also be introduced into the pavement distresses detection, such as convolutional neural networks.
## Acknowledgements
This study was financially supported by two grants from the National Natural Science Foundation of China (No. 41571331) and from Xinjiang Production and Construction Corps (No. 2016 AB001).
Figure 4: (a) the classification accuracy and (b) running time of Random Forest over a series of numbers of trees; (c) the relationship between running time and classification accuracy of the best performance of six feature combinations
## References
* [PERSON] (2001) [PERSON], 2001. \"Random forests\". _Machine learning_, 45(1), 5-32.
* [PERSON] et al. (2016) [PERSON], [PERSON], & [PERSON], 2016. \"Detection of cracks in Paved Road Surface Using Laser Scan Image Data\". _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences_, XLI-B1:559-562
* [PERSON] et al. (2003) [PERSON], [PERSON], [PERSON], & [PERSON], 2003. \"Image segmentation for the purpose of object-based classification\". _IEEE International Geoscience and Remote Sensing Symposium 2013_, 3:2039-2041.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2008. \"The pothole patrol: using a mobile sensor network for road surface monitoring\". _The Proceedings of the 6 th international conference on Mobile systems, applications, and services_, June 17-20, 2008, Breckenridge, Colorado, USA.
* [PERSON] et al. (1994) [PERSON], [PERSON], & [PERSON], 1994. Modern pavement management, Krieger Publishing, 1994.
* [PERSON] et al. (1986) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 1986. \"Pavement Condition Index (PCI) for Flexible Pavements\". _Defects_, 1986.
* [PERSON] & [PERSON] (2012) [PERSON], & [PERSON], 2012. \"Sensor correction of a 6-band multispectral imaging sensor for UAV remote sensing\". _Remote Sensing_, 4(5), 1462-1493.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2015. \"A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure\". _Advanced Engineering Informatics_, 29(2), 196-210.
* [PERSON] & [PERSON] (2007) [PERSON], & [PERSON], 2007. \"Accuracy of pavement thicknesses estimation using different ground penetrating radar analysis approaches\". _NDT & E International_, 40(2), 147-157. doi:10.1016/j.ndeint.2006.09.001
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2015. \"Monitoring asphalt pavement damages using remote sensing techniques\". _The Third International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2015)_, 95350S (June 19, 2015); doi:10.1117/12.2195702.
* [PERSON] et al. (2016) [PERSON], [PERSON], & [PERSON], 2016. \"Comparison of Supervised Classification Techniques for Vision-Based Payement Crack Detection\". _The Transportation Research Board 95 th Annual Meeting_, January 13-17, 2013 Washington, DC, pp 119-127.
* [PERSON] & [PERSON] (2009) [PERSON], & [PERSON], 2009. \"Supervised crack detection and classification in images of road pavement flexible surfaces\". _INTECH_, 2009, 100(8), doi: 10.5772/7448.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2017. \"Mapping asphalt pavement aging and condition using multiple endmember spectral mixture analysis in Beijing, China\". _Journal of Applied Remote Sensing_, 11(1), 016003-016003. doi:10.1117/1.JRS.11.016003
* [PERSON] & [PERSON] (2010) [PERSON], & [PERSON], 2010. \"Automatic asphalt pavement crack detection and classification using neural networks\". _12 th Biennial Baltic Electronics Conference_, Estonia, 329(2):345-348.
* [PERSON] et al. (2015) [PERSON] [PERSON], [PERSON], [PERSON], & [PERSON], 2015. \"Review of remote sensing methodologies for pavement management and assessment\". _European Transport Research Review_, 7(2), 1-19. doi:10.1007/s12544-015-0156-6
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. \"Textural and local spatial statistics for the object-oriented classification of urban areas using high resolution imagery\". _International Journal of Remote Sensing_, 29(11), 3105-3117. doi:10.1080/01431160701469016
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2008. \"Automatic recognition of pavement surface crack based on Bp neural network\". _International Conference on the Computer and Electrical Engineering_, 2008:19-21.
* [PERSON] & [PERSON] (2007) [PERSON], [PERSON], & [PERSON] 2007. \"ML-KNN: A lazy learning approach to multi-label learning\". _Pattern recognition_, 40(7), 2038-2048.
* [PERSON] & [PERSON] (2014) [PERSON], [PERSON], & [PERSON], 2014. \"Use of Low-cost Remote Sensing for Infrastructure Management\". _The Construction Research Congress 2014_, Atlanta, 1299-1308.
|
isprs
|
OBJECT-BASED AND SUPERVISED DETECTION OF POTHOLES AND CRACKS FROM THE PAVEMENT IMAGES ACQUIRED BY UAV
|
Y. Pan, X. Zhang, M. Sun, Q. Zhao
|
https://doi.org/10.5194/isprs-archives-xlii-4-w4-209-2017
| 2,017
|
CC-BY
|
isprs/168e40b7_bb77_4dd6_9c5a_b39cbebc2f05.md
|
# Automatic Generation of High-Resolution Thermal 3D Building Envelope Models exploiting UAV Imagery
[PERSON]
[PERSON]
[PERSON]
Stephan Nebiker
Institute of Geomatics, FHNW University of Applied Sciences and Arts Northwestern Switzerland, 4132 Muttenz, Switzerland - (elia.ferrari, jonas.meyer, stephan.nebiker)@fhhw.ch, [EMAIL_ADDRESS]
###### Abstract
Buildings are major contributors to global energy consumption, with the thermal performance of their envelopes playing a crucial role. Detecting thermal bridges, which compromise insulation, is essential for energy efficiency. To efficiently detect thermal bridges, thermal infrared (TIR) imagery is widely used through visual inspections, more recently by exploiting sensors mounted on unmanned aerial vehicles (UAVs). While RGB images have been extensively used in Structure-from-Motion and Multi-View Stereo processes, applying these techniques to TIR images presents challenges due to lower resolution and inconsistent colour spaces. To overcome the challenges posed by TIR imagery, approaches from different fields investigated the integration of TIR images with other data to support the alignment. Our approach improves upon these methods by using a DJI Mavic 3 Enterprise Thermal UAV to collect RGB and TIR datasets simultaneously. Our guided image alignment and camera rig estimation approach accounts for unknown camera calibration, misalignment, and lever arm parameters, ensuring robust alignment of TIR images with a total error of 5 pixels. With this approach, the geometric accuracy of the resulting point cloud reached an RMSE of 0.13 m. Finally, thermal calibration values collected on site were applied to correct the thermal images, improving temperature value accuracy for 3D model texturing with a temperature deviation of 2.8 \({}^{\circ}\)C. The developed method requires no prior camera calibration, TIR image pre-processing, or ground control points, permitting a complete automation of the process.
TIR, Thermal inspection, 3D model, 3D reconstruction, Camera alignment, UAV. +
Footnote †: journal: Physics Letters B
## 1 Introduction
### Motivation
Buildings make significant contributions to global energy consumption and greenhouse gas emissions. The thermal performance of the building envelope has a significant impact on overall energy consumption. To efficiently detect thermal bridges (weak points in the thermal insulation), thermal infrared (TIR) imagery is widely used through visual inspections ([PERSON] et al., 2020). However, often true to scale data, such as point clouds or 3D models of building envelopes, enriched with thermal information, are needed for locating heat leaks for for planning refurbishments of buildings. Compared to RGB images, TIR images show lower geometric and radiometric resolution, lower dynamic range ([PERSON] et al., 2013), are affected by unsharp definitions of discontinuities and small details (e.g. blurred edges) ([PERSON] et al., 2018). Hence, structure from motion (sfM) and multi view stereo (MVS) based processing alone is often insufficient in terms of spatial resolution and accuracy ([PERSON] and [PERSON], 2019). Various strategies focusing on the fusion of TIR images with RGB images ([PERSON] et al., 2022), point clouds ([PERSON] et al., 2023; [PERSON] and [PERSON], 2018; [PERSON] et al., 2020; [PERSON] et al., 2019) or 3D models ([PERSON] et al., 2011; [PERSON] and [PERSON], 2017) from other sources address these geometrical issues. However, such approaches impair their applicability due the complexity of tasks such as TIR camera calibration, operation of high-end data acquisition systems, TIR image pre-processing, and image alignment, as well as the dependence on available and current data when relying on external sources, like 3D city models.
Early research with experimental multi-head sensor systems for Unmanned Aerial Vehicle (UAV) systems date back more than 15 years ([PERSON] et al., 2008). With the advent of commercial off-the-shelf UAV systems with multi-sensor heads, RGB and TIR images can be easily acquired simultaneously ([PERSON], 2024). If the UAV additionally includes an RTK-GNSS module, accurate pose priors can be determined. Such UAV systems allow for the efficient capturing of buildings in just a few minutes. SfM based 3D model generation, especially alignment of TIR images, however, still poses major challenges and requires elaborate workflows ([PERSON] et al., 2022).
In this paper we propose a fully automated process, based on the SfM software Agisoft Metashape (Agisoft LLC, 2023) to create thermal 3D building envelope models. We only use simultaneously captured RGB and TIR images from a UAV and accurate pose priors, without the need for GCPs, TIR image pre-processing or known camera calibration. Our main contributions are:
* A guided image alignment and rig estimation process
* A fully automated process from raw RGB and TIR to a 3D thermal building envelope model
* A qualitative and quantitative analysis of first results
### Related Work
The number of applications and studies using UAVs has rapidly increased in recent years due to several reasons. UAVs have become more affordable and dependable, and they can be equipped with different sensors and used for multiple applications. In particular, the use of TIR sensors has been researched in a number of works, ranging from agriculture and forestry ([PERSON] et al., 2017), heritage asset documentation ([PERSON] et al., 2022) to buildings and infrastructures thermal analysis ([PERSON] et al., 2020; [PERSON] et al., 2022).
Standard procedures for RGB imagery processing in mapping and 3D modelling now employ modern SfM-MVS workflows ([PERSON] and [PERSON], 2021). In recent years, these approaches have been applied to thermal infrared images to obtain products such as point clouds or 3D models of buildings with thermal information ([PERSON] et al., 2022).
In the approach of [PERSON] et al. (2020), they collected imagery with a UAV equipped with a professional TIR camera, focusing on the 3D reconstruction directly from TIR images. They subsequently applied a thermal correction to the imagery using the temperature deviation of aluminium foil employed for the targets as reference. The analysis with the collected reference measurements showed an absolute temperature accuracy of 5\({}^{\circ}\)C. Large scale approaches cannot rely on single targets of aluminium foil. Therefore, [PERSON] et al. (2024) applied the Thermal Urban Road Normalization algorithm developed by [PERSON] et al. (2014) to interpolate temperature deviation within a scene and normalize TIR imagery, based on the assumption of roads as pseudo invariant objects. Conversely, [PERSON] et al. (2018) exploited TIR imagery collected with a plane to generate a large scale thermal orthomosaic of a city. In their approach they added a pre-processing step, evaluating different radiometric enhancement methods. This improved the effectiveness of the process with SfM algorithms, generating more tie points in the sparse cloud and a slightly higher density of the dense cloud.
[PERSON] et al. (2022) highlighted the challenges of 3D reconstructions directly from TIR imagery, which have a limited field of view (FoV) as well as low spatial resolution and therefore generate incomplete point clouds. In addition, the image alignment is made more arduous by the lack of distinctive features.
To face the previously mentioned challenges associated with TIR images, different approaches have been developed. [PERSON] and [PERSON] (2017) exploited an existing 3D building model and its uncertainties for a co-registration of TIR imagery, refining the exterior orientation parameters of the camera. Other studies instead exploited data collected by newly developed low-cost multi-sensor UAVs, capable of simultaneously capturing visible and thermal infrared images. This offers significant advantages for a combined RGB and TIR image alignment with an improved geometric quality of the 3D reconstruction ([PERSON] et al., 2022; [PERSON] et al., 2023). In these studies, dual-sensor datasets were used to investigate the advantages of an integrated reconstruction approach, evaluating them on single buildings or facades. [PERSON] et al. (2023) implemented a rectification method for TIR images to allow the fusion with RGB images and validated it on a building facade with a fusion error of 5.7 pixels between RGB and TIR images. In contrast, the approach of [PERSON] et al. (2022) implemented a three-step workflow to align the images, using the estimated poses of RGB images to improve the TIR image alignment. However, this approach assumes that no misalignment or lever arm between the two cameras persists. The resulting average checkpoint RMSE showed values about 40% higher than the Ground Sampling Distance (GSD) and in one case inferior values than the 3D reconstruction with pure TIR images. Similarly, [PERSON] et al. (2017) adopted the estimated poses of RGB images to enhance the TIR image alignment. Additionally, they also adopted the previously estimated camera parameters from a camera calibration. In contrast, [PERSON] et al. (2020) showed that thermal 3D models benefit from the combination of RGB and TIR images. They performed image alignment (RGB only) and projected thermal information to 3D points by known pre-calibrated lever arm and misalignment values. Despite the added value of a pre-calibration of the thermal camera, low-cost dual-sensors systems are usually unstable, and the internal camera parameters can therefore vary from flight to flight. Thus, for UAV-based data collection a self-calibration is preferable ([PERSON] et al., 2023).
[PERSON] et al. (2021) presented an approach to determine absolute temperature values from a point cloud generated using RGB images. In their investigations, the data collected with a dual-sensor system have been combined, removing the image distortions and projecting the TIR images on the RGB images via transformation matrices. In this way the temperature values can be assigned to the corresponding points of the point cloud. Accordingly, [PERSON] et al. (2022) exploit dual-sensor data to create a point cloud augmented with thermal information. They adopted a fixed camera system to remove TIR images onto RGB images exploiting the known relative rotation and translation between the two cameras. The method is computationally expensive and requires an additional visibility test for a correct interpretation. As an alternative, they generated four-channel images adding the TIR information to the RGB images and recomputing the image alignment with the new images. This resulted in faster image processing. However, it resulted in lower accuracies of the checkpoints in comparison to the first method.
A combined evaluation of RGB and TIR images should enable a more robust and accurate generation of thermal 3D building models. Such geometrically precise 3D models would allow a much better assessment of the entire building envelope and planning of renovation measures.
## 2 Materials and Methods
### Instruments
In this paper, a DJI Mavic 3 Enterprise Thermal (M3T) is used for data acquisition. The UAV is equipped with a multi-sensor camera head, which can capture RGB and TIR images simultaneously. The thermal data is recorded with an accuracy of 2 \({}^{\circ}\)C and an image resolution of 0.3 MP, while the wide-angle camera could capture 48 MP RGB images. However, the UAV system limits the wide-angle camera resolution to 12 MP if RGB and TIR data are collected simultaneously. As [PERSON] and [PERSON] (2023) showed, another limitation is caused by the electronic shutter of the UAV, which directly influences the flight plan and speed due to the rolling shutter effect. However, in comparison to the UAV used by [PERSON] and [PERSON] (2023), a DJI Phantom 4 Pro, the electronic shutter of the M3T performs a full sensor readout two and half times faster and does not need post-processing compensation in low-speed flight mode, as demonstrated by [PERSON] (2024). In addition, the UAV was equipped with a real-time kinematic (RTK) module, which determines image positions with an accuracy of up to 1 \(\text{ cm}+1\) ppm horizontally and 1.5 \(\text{ cm}+1\) ppm vertically (DJI, 2023).
To validate absolute temperature values, reference measurements were carried out with a FLIR E40 thermal camera, which enabled independent thermal measurements with an accuracy of 2 \({}^{\circ}\)C (FLIR, 2011).
Along with reference temperature measurements, checkpoints have been distributed to enable an accuracy analysis of the results. The targets were measured using a combination of RTK GNSS Rover and total station, allowing the data to be collected within a standard deviation of \(\div\)5 cm.
### Study Area
The study area is a detached building in a rural environment. The building is inhabited and heated. It is equipped with a photovoltaic installation on the roof and, according to construction documentation, is poorly insulated. The surrounding terrain has a gentle uniform slope in one direction and just a few elements that obstruct the view, such as trees. As illustrated in Figure 1, seven checkpoints, five on the ground and two on the facade, have been materialised, to provide information about the accuracy of the generated products during analysis.
No GCPs were placed to maintain fully automated data processing without operator intervention. All checkpoints have been realised using special targets made of aluminium foil, which has a low thermal emissivity. As shown in Figure 2, the targets are clearly visible in the RGB and TIR images. The coordinates of all targets on the ground have been determined by multiple RTK-GNSS measurements, while additional checkpoints on the facades have been measured with a total station. All checkpoints are expected to have a standard deviation of less than 5 cm.
### Data Acquisition
The study area was captured with a UAV DJI M3T, described in section 2.1. To completely capture the building and its facades, which are partly obscured by caves and balconies, 362 RGB and TIR images were captured in three configurations: nadir, oblique and close-range (Figure 3). Nadir and oblique images were captured from 30 metres above ground and an angle of 45\({}^{\circ}\) was chosen for the oblique configuration. The close-range images were recorded with an average object distance of 5 metres. Table 1 shows the number of images per configuration and the average GSD for RGB and TIR in each configuration.
The flight mission was conducted in mid-December 2023 in the early morning to ensure minimum solar irradiation on the one hand, and good quality RGB images on the other. At the start of the mission, a thermal sensor calibration was performed to establish the optimal temperature range and emissivity of the measured object. In addition, environmental values, such as distance from the object, humidity, emissivity and reflected temperature, have been registered to automatically adjust the thermal measurements in post processing. The maximum speed of the UAV was set to 3 m/s resulting in a data acquisition time of around 16 minutes. To enable georeferencing, the RTK module of the UAV was used to record precise image poses and to facilitate the image alignment process. At the same time, a FLIR E40 thermal imaging camera was used to collect reference data before and after each flight to analyse the absolute temperature values in order to ensure an average value valid for the entire flight. The measurements were taken at a distance of around 2 meters from the building facade, on window frames or thermal bridges of the construction, which were also visible on UAV images. To this end, three points of interest on the north, west and east facade have been identified and measured.
### Guided Image Alignment and Camera Rig Estimation
To create a combined 3D model with thermal information, both RGB and TIR images need to be aligned precisely. Standard image alignment procedures usually consider each image separately. While the alignment of RGB imagery with sufficient overlap has become a standard procedure, several works show that the alignment of TIR images still poses challenges caused by low geometric and radiometric resolution as well as small fields of views ([PERSON], 2021; [PERSON] et al., 2022; [PERSON] et al., 2020). Furthermore, the establishment of matches between RGB and TIR images is likely to fail due to radiometric differences and lack of distinctive features on the TIR images ([PERSON] et al., 2022). To increase the stability of the image alignment process we define a multi-sensor rig where the RGB camera is the primary sensor and the TIR camera the secondary sensor defined by its relative orientation (lever-arm \(T_{RGB}^{RGB}\) and misalignment \(R_{RGB}^{RGB}\)) with respect to the primary sensor. Since the calibration parameters of both, the RGB and TIR camera are unknown, a self-calibration of both cameras is performed. However, the translational aspect of the relative orientation correlates strongly with the TIR camera's focal length and principal point, meaning that these camera parameters can also be described by shifting the relative orientation accordingly ([PERSON], 2010).
Initial tests showed that the alignment of the RGB images was successful, but the alignment of the TIR images failed when estimating all unknown parameters during bundle adjustment. According to [PERSON] and [PERSON] (2006) we assume that the
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline configuration & GSD & mm & Total & images & forward / \\ \cline{2-5} & RGB & TIR & RGB / TIR & side overlap \\ \hline nadir & 10.8 & 39.6 & 60 & 80\% / 75\% \\ \hline oblique & \multirow{2}{*}{15.3} & \multirow{2}{*}{56.0} & \multirow{2}{*}{172} & \multirow{2}{*}{70\% / 75\%} \\ (centre pixel) & & & & \\ \hline close range & 1.8 & 6.6 & 130 & – / 75\% \\ \hline \end{tabular}
\end{table}
Table 1: Ground sampling distances and number of images per image type and configuration.
Figure 1: Study object and checkpoint distribution.
Figure 3: Configuration of different flight missions (nadir, oblique and close range) to completely capture the building of interest.
Figure 2: Example of the point materialization. Classical target enhanced with aluminium foil (1) and a cross made of aluminium foil (2) in an RGB (left) and TIR image (right).
bundle adjustment process fails because of the strong correlation of the lever-arm and the intrinsic camera parameters in combination with a weak network geometry due to few key point correspondences in the TIR images.
Under the assumption of accurate image pose priors (0.05 m and 10\({}^{\circ}\) standard deviation for position and rotation components respectively) we developed a guided image alignment and camera rig estimation process consisting of three main steps:
1. **Initial image alignment** Projection centres of RGB and TIR sensors are identical (lever-arm \(T_{TR}^{RGB}=(0,0,0)^{T}\)). Estimation of misalignment and all intrinsic camera parameters of the RGB and TIR sensor except for the focal length of the TIR sensor, which is fixed to the initial value obtained from metadata, during the bundle adjustment.
2. **Camera optimization and lever-arm estimation** Introduction of the lever-arm \(T_{TR}^{RGB}=(-0.02,0,0)^{T}\) (manually measured values in metres) with a standard deviation of 0.001 m. Misalignment and camera intrinsics parameters are treated as in the previous step.
3. **Camera optimization and TIR camera calibration** Estimation of all values of the TIR camera.
For step b) and c) the estimated values of the previous steps are used as approximate values for the current optimization step. The introduced standard deviation for the camera poses and the lever-arm are left unchanged. Additionally, rolling shutter compensation is disabled due to the low-speed flight mode used and the previously described shutter speed of the DJI M3T (section 2.1).
### Temperature Values Correction and Conversion
The TIR images collected with the M3T are saved as radiometric JPEGs, which is a binary format just used to display the coloured TIR images. With the purpose of enabling further processing, the temperature information encoded in the radiometric JPEG files needs to be converted as it cannot be read directly. With the help of the DJI Thermal SDK, the data have been corrected using the collected object distance, humidity, emissivity and reflected temperature and then saved as standard raw files with just one channel containing the corrected absolute temperature values for each pixel (DJI, 2022).
### Process Automation
In the last part of our approach, we focused on the automation of the whole workflow depicted in Figure 4, integrating all the processing steps within a script implemented in Python. To this end, we exploited the Application Programming Interface (API) of the Agisoft Metashape (Aigisoft LLC, 2023) software to automate the image alignment and camera rig estimation, as well as for dense point cloud and 3D model generation. The integration of the DJI Thermal SDK in the script allowed the automation of thermal image conversion and correction. Finally, the converted images were replaced and automatically processed for the texture generation, again with the Agisoft Metashape API. This enabled the fully automated generation of a realistic and high-resolution thermal 3D building envelope model with absolute temperature values.
### Point Cloud and Texture Evaluation
After defining the camera rig and image alignment strategy, both geometrical accuracy and absolute temperature accuracy of the results were evaluated. For the geometrical analysis, the dense point cloud resulting from the aligned RGB images has been used as reference for a comparison with both, the point cloud from the aligned TIR images and the point cloud resulting from the combined alignment of TIR and RGB images. To this end, the same point cloud section has been extracted from all three point clouds and the deviation from the reference has been computed.
Due to the lower geometric resolution, the image alignment of TIR images is significantly less accurate than the alignment of RGB images ([PERSON] et al., 2018). In order to investigate the impact on the final product, a 3D mesh was generated using in turn the combined point cloud, RGB and TIR images, and the point cloud resulting from RGB images only. The resulting meshes were textured with TIR images aligned with the combined approach and with RGB images from standard alignment, respectively. The measured checkpoints attached to the facades were used to calculate the deviation of the TIR texture from the RGB texture (reference).
Finally, the evaluation of the absolute temperature values aims to compare the generated thermal texture generated from the corrected TIR images with the reference values of the FILR 40 thermal camera. For this analysis, the measurements of the three reference points on the building's facades were compared with the measurements on the UAV's TIR imagery. In this context, the thermal values have been calculated from the average of six UAV's images, whereby the average temperature of all values within a radius of six pixels was used for each image.
Figure 4: Workflow of our automated photogrammetric process.
## 3 Experiments and Results
### Guided Image Alignment and Rig Estimation
Our proposed image alignment and camera rig estimation process is evaluated by measuring all seven checkpoints (Figure 1) within the aligned images. This is performed for both TIR and RGB images independently. The residuals in checkpoint coordinates are calculated after each of the three steps of the proposed approach: a) initial camera alignment (with identical projection centres for TIR an RGB images, and fixed focal length of TIR camera); b) camera optimization and lever-arm estimation; c) camera optimization and TIR camera calibration. Table 2 shows the average residuals for all seven checkpoints for TIR and RGB images respectively. The residuals are provided as 2D and 3D coordinate differences in object space and as pixel differences in image space.
Table 2 shows that the residuals of the checkpoints measured in TIR images are lower after our proposed guided image alignment and camera-rig estimation process. However, the estimation of the lever-arm without optimizing the focal length of the TIR camera results in slightly higher residuals than assuming that both projection centres are identical. In contrast, the residuals of checkpoints measured in RGB images do not change after the initial alignment process, meaning that the estimation of the lever-arm, misalignment and intrinsic camera parameters have no influence on the RGB image poses.
### Point Cloud and Texture Evaluation
#### 3.2.1 Point Cloud Evaluation
As introduced in section 2.7, the three resulting point clouds, from RGB images only, from TIR images only and from the combination of RGB and TIR images, were compared. In this comparison the first one served as reference to examine the deviations of the latter two.
The geometric accuracy analysis of the combined point cloud showed that around 40% of the points show less than 2 cm deviation from the reference. 90% of the points show differences of up to 5 cm from the reference, resulting in a total RMSE of 0.13 m. Similarly, the comparison of the point cloud generated with aligned TIR images only with the reference point cloud resulted in a deviation of 2 cm for 45% of the points. However, only 77% of the points showed a deviation from the reference point cloud of less than 5 cm, resulting in a higher total RMSE of 0.19 m.
The point cloud resulting from the combined method showed a remarkably high point density of 2102 pts/m\({}^{2}\), comparable to the point cloud resulting from pure RGB image alignment with 2154 pts/m\({}^{2}\). In contrast, the point cloud resulting from TIR images only has significantly higher noise and about one third of the density of the reference point cloud (642 pts/m\({}^{2}\)).
#### 3.2.2 Texture Evaluation
The investigation showed deviations of the TIR texture from the RGB texture averaging 5.5 cm in position (2D) and averaging 3 cm in height. The worst value has been encountered at the check point CIKP5, placed on the west facade, with a difference from the reference (RGB texture) of 10 cm in position. As shown in section 3.1 the accuracy of the TIR image alignment with residuals of 4.9 pixels is significantly lower than that of the RGB images with 0.95 pixels (Table 2). Considering the GSD of the close-range images with 6.6 mm for TIR images and 1.8 mm for RGB images (Table 1), the obtained differences can be explained by the uncertainty of the image alignment and the resulting deviations in the object space.
In addition, as shown in Figure 5, a visual comparison of the two textures was carried out. It showed that the structure of the photovoltaic system and the different insulation layers between the ground floor and the first floor are easily recognisable and can be correctly located in the 3D model. Despite a maximum deviation of 10 cm, the overall models showed satisfactory results for the application of a thermal 3D model.
### Absolute Temperature Values
In the analysis of the accuracy of absolute temperature values, the reference temperature values of the FLIR E40 have been compared with those calculated from the M3T imagery, as described in section 2.5. The differences summarized in Table 3, show no systematic deviation and lie within the simple standard deviation of the instrument accuracy of 2.8 \({}^{\circ}\)C.
### Discussion
The combined processing of RGB and TIR images is beneficial for image alignment as it can address the challenges posed by TIR imagery. This can avoid using radiometric enhancement methods to improve TIR image alignment and prevent applying two transformations to obtain the original temperature values ([PERSON] et al., 2018). An additional challenge arises from the differing ideal conditions required for capturing RGB and TIR data simultaneously. RGB image quality depends on adequate natural light, requiring daylight conditions to capture high
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Imagery & \multicolumn{2}{c|}{TIR} & \multicolumn{2}{c|}{RGB} \\ \hline \multirow{3}{*}{Residuals} & object space & image & object space & image \\ & 2D / 3D & space & 2D / 3D & space \\ & [m] & [pix] & [m] & [pix] \\ \hline (a) & 0.101 / 0.245 & 5.29 & 0.030 / 0.055 & 0.95 \\ (b) & 0.101 / 0.251 & 5.34 & 0.030 / 0.055 & 0.95 \\ (c) & 0.095 / 0.232 & 4.92 & 0.030 / 0.055 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 2: Residuals of checkpoint observations in TIR and RGB images after each step of the proposed image alignment process.
Figure 5: Sections of RGB texture (in the background) and thermal texture (circular, in the centre): roof view (left), east facade (right).
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Reference & FLIR E40 & UAV M3T & \(\Delta\)T \\ point & (reference) & (actual) & (reference \\ & Average / & Average / & - actual) \\ & Std. Dev. & Std. Dev. & \\ \hline East facade & 2.1 \({}^{\circ}\)C \(\pm\) 0.8 & 0 \({}^{\circ}\)C \(\pm\) 0.1 & 1.3 \({}^{\circ}\)C \\ \hline West facade & -3.4 \({}^{\circ}\)C \(\pm\) 0.7 & -0.6 \({}^{\circ}\)C \(\pm\) 0.3 & -2.8 \({}^{\circ}\)C \\ \hline North facade & -0.5 \({}^{\circ}\)C \(\pm\) 0.3 & -3.2 \({}^{\circ}\)C \(\pm\) 0.2 & 2.7 \({}^{\circ}\)C \\ \hline \end{tabular}
\end{table}
Table 3: Absolute temperature differences between average reference value (FLIR E40 thermal cameras) and average value of UAV (M3T) TIR imagery.
contrast, sharp visuals. In contrast, TIR imaging benefits from minimal solar radiation, as thermal readings become more reliable and less influenced by external heat sources, such as direct sunlight. This discrepancy between real and ideal conditions presents a trade-off when collecting integrated RGB-TIR data, as optimisation for one sensor may compromise the other. This reduces the possibilities for data collection around twilight periods when ambient light is low enough to avoid substantial heat from solar radiation, yet sufficient for capturing usable RGB images.
A further consideration is the choice of sensor used for flight planning, which directly affects either the TIR or RGB imagery. In this study, the RGB sensor was selected for mission planning. While this ensured adequate overlap and coverage in RGB images, it led to lower overlap in TIR images due to the narrower FoV and consequently impacting image alignment accuracy and point cloud density of TIR images. On the other hand, using the TIR sensor for mission planning results in capturing more RGB images, leading to increased data volume and processing demands.
Integrating all processing steps within a single script automates the entire workflow, including guided image alignment and camera rig estimation, absolute temperature value correction and conversion, 3D modelling and texturing, thus eliminating time-consuming manual interactions. The automated pipeline leverages RTK-GMSS positioning instead of GCPs, simplifying and speeding up data acquisition and processing. However, the reliance on RTK-GMSS technology brings with it a dependency for the automation process, which could limit the application to data coming from UAVs equipped with RTK-GMSS only.
The proposed guided image alignment and camera rig estimation steps provided an alignment accuracy of approximately 5 pixels, slightly higher than the 3-4 pixels accuracy obtained in GCP-supported workflows ([PERSON] et al., 2018). While adequate for generating visually accurate thermal 3D models, this variance suggests that the use of GCPs could further benefit the alignment accuracy. However, special coded targets should be employed, with the intention of guaranteeing the complete automation of the process.
The data fusion allows for generating a thermal point cloud with the higher accuracy and density of a RGB point cloud. The inclusion of both RGB and TIR imagery enhances the quality of the 3D point cloud, not just generating a point cloud density like the standard RGB point cloud, but also reaching a maximal deviation from the reference point cloud of 5 cm for 90% of the points. The total RMSE of 0.13 m can be considered as good in comparison with results obtained with other methods, which yielded a RMSE between 0.2 and 0.22 m ([PERSON] et al., 2020). However, using the RGB point cloud as reference a dependency between TIR and RGB data is introduced, since both data have been collected simultaneously with the same platform. A feasible alternative for the geometric analysis consists in an independent measurement system, such as a terrestrial laser scanner, which can provide additional and unrelated information about the geometry accuracy of the approach.
The deviation between RGB and TIR textures presented in section 3.3 most likely result from a combination of inaccurate parameters of the external orientation of the TIR sensor and stitching errors due to the low geometric resolution and dynamic range of TIR imagery. Possibilities to further increase the accuracy of the thermal textures could lie in a more precise approach for estimating the relative orientation (misalignment and lever arm) and a compromise in flight planning to ensure greater overlap of the TIR images.
The evaluation of the absolute temperature values in section 3.4 supports the method's applicability for thermal assessment, as it allows for effective representation of temperature variation across building surfaces with an accuracy higher than other approaches ([PERSON] et al., 2022; [PERSON] et al., 2020). It also demonstrates that, the temperature correction with calibration values is a crucial component for obtaining accurate thermal values.
## 5 Conclusions and Outlook
In this study, we presented an automated workflow for creating 3D thermal models using data from a dual-camera UAV system equipped with an RTK-GMSS module. By leveraging simultaneously captured RGB and TIR images with multi-camera head and eliminating the need for GCPs, we achieved a fully automated process. The implemented three-step method for a guided image alignment and camera rig estimation enabled a combined processing of TIR and RGB images, resulting in alignment residuals of approximately 5 pixels at measured checkpoints. Furthermore, from these aligned images, a TIR point cloud with an enhanced density was generated, with a deviation from the reference under 5 cm for 90% of the points and a total RMSE of 0.13 m. Finally, the temperature corrections applied to TIR images produced thermal textures with a standard deviation of 2.8 \({}^{\circ}\)C. While differences in optimal conditions for capturing RGB and TIR imagery pose limitations for simultaneous data collection, the integration of TIR and RGB datasets enhances the visualization and analysis of building thermal performance.
Future research could focus on refining alignment accuracy by developing automated GCPs measurement methods with coded targets and improving the estimation of camera calibration, lever-arm and misalignment of multi-sensor heads. Additionally, validating this approach with an independent system, such as terrestrial laser scanning, could provide further insights into its geometric accuracy. Incorporating facade-mounted reference sensors during data acquisition could also enhance the reliability of absolute temperature values. These advancements would further improve the method's accuracy.
## References
* Agisoft LLC (2023) Agisoft LLC, 2023. Agisoft Metashape Professional, Version 2.1.2.
* [PERSON] (2021) [PERSON], 2021. Photogrammetric analysis of multispectral and thermal close-range images. Mersin Photogrammetry Journal, 3, 29-36. doi:10.53093/mephoj.919916.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Mapping Infrared Data on Terrestrial Laser Scanning 3D Models of Buildings. Remote Sensing, 3(9), 1847-1870. doi:10.3390/rs3091847.
* [PERSON] and [PERSON] (2023) [PERSON], [PERSON], [PERSON], 2023. Experimental Tests and Simulations on Correction Models for the Rolling Shutter Effect in UAV Photogrammetry. Remote Sensing, 15(9), 2391. doi:10.3390/rs15092391.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], 2018. Structure from Motion for aerial thermal imagery at city scale: Pre-processing, camera calibration, accuracy assessment. ISPRS Journal of Photogrammetry and Remote Sensing, 146, 320-333. doi:10.1016/j.ispspsps.2018.10.002.
** [PERSON] (2021) [PERSON], [PERSON], 2021. Accuracy of Unmanned Aerial Systems Photogrammetry and Structure from Motion in Surveying and Mapping: A Review. J Indian Soc Remote Sens, 49(8), 1997-2017. doi:10.1007/s12524-021-01366-x.
* DJI Mavic 3 Enterprise
- DJI Enterprise. [[https://enterprise.dij.com/mavic-3-enterprise/photo](https://enterprise.dij.com/mavic-3-enterprise/photo)]([https://enterprise.dij.com/mavic-3-enterprise/photo](https://enterprise.dij.com/mavic-3-enterprise/photo)) (10 May 2024).
* DJI (2022) DJI, 2022. DJI Thermal SDK, Version 1.4.
* Dlesk and Vach (2019) [PERSON], [PERSON], [PERSON], K., 2019. POINT CLOUD GENERATION OF A BUILDING FROM CLOSE RANGE THERMAL IMAGES. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-5/W2, 29-33. doi:10.5194/isps-archives-XLII-5-W2-29-2019.
* Dlesk and Vach (2022) [PERSON], [PERSON], [PERSON], K., [PERSON], K., 2022. Photogrammetric Co-Processing of Thermal Infrared Images and RGB Images. Sensors, 22(4), 1655. doi:10.3390/s22041655.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], 2023. Multi-modal image matching to colorize a SLAM based point cloud with arbitrary data from a thermal camera. ISPRS Open Journal of Photogrammetry and Remote Sensing, 9, 100041. doi.org/10.1016/j.opbto.2023.100041.
* FLIR (2011) FLIR, 2011. FLIR E-Serie. [[https://www.flir-infrademaras.de/WebRoot/Store12/Shops/61587589/4](https://www.flir-infrademaras.de/WebRoot/Store12/Shops/61587589/4) DFB9]([https://www.flir-infrademaras.de/WebRoot/Store12/Shops/61587589/4](https://www.flir-infrademaras.de/WebRoot/Store12/Shops/61587589/4) DFB9) F3E/2D90/6279C31D/CO829 BA/0647/FLIR_series.pdf (16 October 2024).
* [PERSON] and [PERSON] (2018) [PERSON], [PERSON] [PERSON], 2018. Mobile thermal mapping for matching of infrared images with 3D building models and 3D point clouds. Quantitative InfraRed Thermography Journal, 1-19. doi:10.1080/17686733.2018.1455129.
* [PERSON] and [PERSON] (2017) [PERSON], [PERSON], [PERSON], 2017. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction. ISPRS Journal of Photogrammetry and Remote Sensing, 132, 33-47. doi:10.1016/j.ispsrspirs.2017.08.006.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], R.K., 2020. A photogrammetric approach to fusing natural colour and thermal infrared UAS imagery in 3D point cloud generation. International Journal of Remote Sensing, 41(1), 211-237. doi:10.1080/01431161.2019.1641241.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Fusion of thermal imagery with point clouds for building facade thermal attribute mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 151, 162-175. doi:10.1016/j.isprispris.2019.03.010.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. An optimized approach for generating dense thermal point clouds from UAV-imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 182, 78-95. doi:10.1016/j.isprisps.2021.09.022.
* [PERSON] (2010) [PERSON], 2010. Erweiterte Verfahren zur geometrischen Kamerkalibrierung in der Nahbereichsphotogrammetrie (Habilitation Thesis). Deutsche Geodatische Kommission, Reihe C, Nr. 645.
* [PERSON] et al. (2013) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2013. Geometric Calibration of Thermographic Cameras, in: [PERSON], [PERSON] (Eds.), Thermal Infrared Remote Sensing, Remote Sensing and Digital Image Processing. Springer Netherlands, Dordrecht, pp. 27-42. doi:10.1007/978-94-007-6639-6_2.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] [PERSON], 2017. Optimizing the Processing of UAV-Based Thermal Imagery. Remote Sensing, 9(5), 476. doi:10.3390/rs9050476.
* Opportunities for Very High Resolution Airborne Remote Sensing. Int Arch Photogram Reme Sens Spatial Inform Sci, 37.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022. SFM-BASED 3D RECONSTRUCTION OF HERITAGE ASSETS USING UAV THERMAL IMAGES. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLIII-B1-2022, 399-406. doi:10.5194/ispsr-archives-XLIII-B1-2022-399-2022.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2014. Transforming Image-Objects into Multiscale Fields: A GEOBIA Approach to Mitigate Urban Microclimatic Variability within H-Res Thermal Infrared Airborne Flight-Lines. Remote Sensing, 6(10), 9435-9457. doi:10.3390/rs6109435.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. Thermal point clouds of buildings: A review. Energy and Buildings, 274, 112425. doi:10.1016/j.enbuild.2022.112425.
* [PERSON] and [PERSON] (2006) [PERSON], [PERSON] [PERSON], 2006. Digital camera calibration methods: Considerations and comparisons. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(5), 266-272. doi:10.3929/ETHZ-B-000158067.
* [PERSON] et al. (2024) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2024. Application, Adaption and Validation of the Thermal Urban Road Normalization Algorithm in a European City. Workshop on Visualisation in Environmental Sciences (EnviVis). doi:10.2312/ENVIRVIS.20241135.
* [PERSON] (2024) [PERSON], 2024. Multi-sensor data fusion for autonomous flight of unmanned aerial vehicles in complex flight environments. Drone Syst. Appl., 12, 1-12. doi:10.1139/dsa-2024-0005.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. Thermal-textured BIM generation for building energy audit with UAV image fusion and histogram-based enhancement. Energy and Buildings, 301, 113710. doi:10.1016/j.enbuild.2023.113710.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. A Thermal Performance Detection Method for Building Envelope Based on 3D Model Generated by UAV Thermal Imagery. Energies 13, 6677. doi:10.3390/en13246677.
* [PERSON] (2024) [PERSON], 2024. DJI Mavic 3 has no mechanical shutter. Sensor readout speed explained. [[https://www.pix-pro.com/blog/dij-mavic-3-rolling-shutter](https://www.pix-pro.com/blog/dij-mavic-3-rolling-shutter)]([https://www.pix-pro.com/blog/dij-mavic-3-rolling-shutter](https://www.pix-pro.com/blog/dij-mavic-3-rolling-shutter)) (11 May 2024).
|
isprs
|
Automatic Generation of High-Resolution Thermal 3D Building Envelope Models exploiting UAV Imagery
|
Elia Ferrari, Jonas Meyer, Andreas Koch, Stephan Nebiker
|
https://doi.org/10.5194/isprs-archives-xlviii-2-w8-2024-155-2024
| 2,024
|
CC-BY
|
isprs/cda5b564_221b_433e_a437_ff90192d32f9.md
| "Observation of Submarine Volcanic Activities\n\nat\n\nthe Kaitoku Seamount and the Hukutoku-Oka-no-(...TRUNCATED)
|
isprs
|
Introduction
|
Elena Faur, Ciprian Speranza
|
https://doi.org/10.55245/energeia.2025.01
| 2,025
|
CC-BY
|
isprs/73df24d3_c8b9_4db6_9f25_fa4859092bf9.md
| "Estimating Industrial Structure Changes in China using DMSP - OLS Night-Time Light Data During 1999(...TRUNCATED)
|
isprs
| "ESTIMATING INDUSTRIAL STRUCTURE CHANGES IN CHINA USING DMSP – OLS NIGHT-TIME LIGHT DATA DURING 19(...TRUNCATED)
|
X. Han, G. Tana, K. Qin, H. Letu
|
https://doi.org/10.5194/isprs-archives-xlii-3-w5-9-2018
| 2,018
|
CC-BY
|
isprs/2a37dff4_f91b_4833_8561_dbeab40b3ba1.md
| "Effect of the distances of public health facilities from the nearest major roads on skilled deliver(...TRUNCATED)
|
isprs
| "EFFECT OF THE DISTANCES OF PUBLIC HEALTH FACILITIES FROM THE NEAREST MAJOR ROADS ON SKILLED DELIVER(...TRUNCATED)
|
K. Rombosia, E. Oele, N. Rangara, J. Mwaura, B. Mitto, E. Ondura, D. Onyango, C. Akoth
|
https://doi.org/10.5194/isprs-archives-xlii-4-w14-203-2019
| 2,019
|
CC-BY
|
isprs/6b213e32_aa20_4efa_8164_a0ca7f646da2.md
| "# Automatic Extraction of Building Outline from High Resolution Aerial Imagery\n\n[PERSON] Wang\n\n(...TRUNCATED)
|
isprs
|
AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY
|
Yandong Wang
|
https://doi.org/10.5194/isprs-archives-xli-b3-419-2016
| 2,016
|
CC-BY
|
EVE-Corpus
EVE-Corpus is a large-scale, cleaned, and anonymized text corpus of Earth Observation (EO) documents formatted in Markdown.
It is designed to support research in EO and domain-specific LLM training.
The corpus contains 270k Markdown files with a total size of 15 GB, sourced from peer-reviewed journals, EO websites and scientific repositories.
Dataset Features
- file_path (
string): the file path of the source document within the s3 bucket. - text (
string): full document content in Markdown format. - publisher (
string): Publisher or source of the document. - title (
string): Title of the document. - authors (
string): Author(s) of the document. - url (
string): Source URL. - year (
int64): Year of publication. - license (
string): Document license information.
Dataset Summary
EVE-Corpus aggregates high-quality EO-related text covering topics such as:
- Satellite missions
- Remote sensing techniques
- Climate and atmospheric science
- Land, ocean, and cryosphere monitoring
- Environmental modelling and geospatial analytics
All documents have been deduplicated, cleaned, and anonymized to remove personal information.
Data Distribution
The dataset includes 4.2B tokens from more than 30 EO-related sources.
| Data Source | Tokens | Ratio (%) |
|---|---|---|
| MDPI | 1.5 B | 35.0 |
| Copernicus | 885 M | 20.5 |
| NCBI | 710 M | 16.5 |
| Wiley | 199 M | 4.6 |
| ISPRS | 158 M | 3.7 |
| IEEE | 147 M | 3.42 |
| ArXiv | 168 M | 3.9 |
| NASA | 72 M | 1.7 |
| Elsevier | 70.6 M | 1.63 |
| Cambridge Press | 51.4 M | 1.2 |
| Springer | 40 M | 0.9 |
| AMS | 34.6 M | 0.8 |
| Intech Open | 14.3 M | 0.33 |
| Taylor and Francis | 18.6 M | 0.4 |
| SAGE | 8.2 M | 0.1 |
| IOP Science | 5.5 M | 0.1 |
| Frontiers | 4.8 M | 0.11 |
| EOS | 4 M | 0.09 |
| ESA | 116 M | 2.7 |
| ESA EO | 1.1 M | 0.02 |
| EGUP | 1.1 M | 0.002 |
| EOGE | 1.1 M | 0.02 |
| Miscellaneous | 1.2 M | 0.02 |
| UK Met Office | 1.5 M | 0.03 |
| Earth Data Science | 417 K | 0.01 |
| NASA CMR | 513 K | 0.01 |
| Oxford Academic | 689 K | 0.01 |
| Wikipedia | 630 K | 0.01 |
| EUMETSAT | 927 K | 0.02 |
| Imperative Space | 74.5 K | 0.002 |
| JAXA | 10 K | 0.0002 |
| Open Night Light | 62 K | 0.001 |
| SEOS | 77.7 K | 0.001 |
| MIT | 216 K | 0.005 |
| Total | 4.2 B | 100 |
Source Collection
The dataset was constructed using this scraping pipeline.
Preprocessing Pipeline
The dataset was processed using the following pipeline:
Extraction
- Supports PDF, HTML, XML, Markdown, and nested folder structures.
- Automatically detects file formats unless explicitly specified.
Deduplication
- Performs exact matching using SHA-256 checksums.
- Supports LSH-based near-duplicate detection (configurable: shingle size, permutations, similarity threshold).
Cleaning
- Removes irregularities and noise artifacts.
- Corrects LaTeX equations and tables using LLM assistance.
PII Removal
- Automatically masks names and emails using the Presidio framework.
Metadata Extraction
- Extracts Title, Authors, DOI, URL, Year, Journal, and Citation Count from scientific papers.
Export
- Saves processed content in multiple formats (default: Markdown).
Citation
If you use this dataset, please cite EVE:
@misc{eve2025,
title={EVE: Earth Virtual Expert},
author={ESA},
year={2025},
publisher={HuggingFace},
url={https://huggingface.co/eve-esa/eve_v0.1}
}
- Downloads last month
- 25